Zaher Joukhadar | Personal Website
.01

ABOUT

PERSONAL DETAILS
The University of Melbourne, Parkville
zjoukhadar@unimelb.edu.au
Hello. I am a Scientific Software Engineer, Team Leader, and Artificial Intelligence enthusiast.

BIO

ABOUT ME

A native of Aleppo, Syria, I hold a bachelor's degree from the University of Aleppo and a master's degree from the University of Melbourne, Australia. I am currently working as Lead Software Engineer at the Microsoft Research Centre for Social Natural User Interfaces (SocialNUI) at the University of Melbourne.

At Microsoft SocialNUI I build cool technologies that bridge the gap between humans and computers, animals and computers, and humans and animals. I manage the SocialNUI Lab and oversee all software development in the centre.

Before joining Microsoft SocialNUI I worked as AI Research Team Leader at IDscan Biometrics Ltd, where we strove to make the world a safer place. At IDscan we did the hard work of authenticating people's IDs and passports and detecting identity fraud.

Off the digital grid, I enjoy spending quality time with family and friends. I like to take photos with my Nikon D5500, and I am also a fan of sci-fi and action movies.

.02

RESUME

ACADEMIC AND PROFESSIONAL POSITIONS
  • 2015
    THE UNIVERSITY OF MELBOURNE

    LEAD SOFTWARE ENGINEER

MICROSOFT RESEARCH CENTRE FOR SOCIAL NUI

Microsoft SocialNUI is an academic-industry research centre at the University of Melbourne. Research in SocialNUI focuses on the social aspects of human-computer interaction (HCI) offered by emerging technologies. I oversee all software development in the centre, and I design, develop, and deploy software to support SocialNUI research. I also manage the SocialNUI Lab, which hosts several master's and PhD students. I am involved in many of SocialNUI's projects and work with some of our PhD students.
  • 2011
    TURKEY

    AI RESEARCH TEAM LEADER

IDSCAN BIOMETRICS LTD

I joined IDscan in 2011, when it was in its early stages, to lead the effort of building one of the company's core software products, the Document Recognition Engine. I led a team of eight research engineers to build a software library that recognises and authenticates scanned images of identification cards and passports. The library later became IDscan's core capability, and most of IDscan's products now use it. We also built a version of the library for mobile devices that recognises images of structured and unstructured documents taken with the phone camera, and we achieved outstanding results. IDscan was acquired by GBG PLC in June 2016.
  • 2013
    THE UNIVERSITY OF MELBOURNE

    RESEARCH ASSISTANT

    MELBOURNE NETWORKED SOCIETY INSTITUTE (MNSI)

MNSI is an interdisciplinary research institute at the University of Melbourne that researches the connectivity between people, places, and things to drive innovation in the networked society. I joined MNSI as an intern to work on a collaborative project between MNSI and the Melbourne School of Health Sciences. The project aims to provide innovative speech pathology services for children with Autism Spectrum Disorder (ASD) and their families in rural areas. I developed an automated tracking and analysis system that provides meaningful statistics on the quality of parent-child interaction, with a dashboard display that takes the output of the Kinect sensor and displays both real-time and cumulative measurements alongside an avatar skeleton figure.
  • 2011
    2012
    ALEPPO

    TEACHING ASSISTANT

    THE UNIVERSITY OF ALEPPO

I was appointed to this position because I was the top-ranked student in my class. Some of the subjects I taught were data mining, expert systems, computer vision, queuing theory, artificial intelligence fundamentals, and artificial neural networks.
EDUCATION
  • 2013
    MELBOURNE

    INFORMATION TECHNOLOGY (COMPUTING) - MASTER

    UNIVERSITY OF MELBOURNE

Artificial intelligence, applied algorithmics, data mining, and software design.
  • 2010
    2011
    STOCKHOLM

    INFORMATION SYSTEM MANAGEMENT

    KTH, ROYAL INSTITUTE OF TECHNOLOGY

Knowledge management, machine learning, and web mining.
  • 2007
    2008
    MELBOURNE

    COMPUTER SCIENCE

    LA TROBE UNIVERSITY

Intelligent systems engineering, intelligent multimedia systems, and entrepreneurship in IT.
  • 2004
    2010
    ALEPPO

    INFORMATICS ENGINEERING - BACHELOR

    THE UNIVERSITY OF ALEPPO

Natural language processing, expert systems, artificial neural networks, and computer vision.
HONORS AND AWARDS
  • 2013
    Melbourne, Australia

    Althuraya Foundation Scholarship

    Althuraya Foundation

I was granted the scholarship to pursue my master's degree at the University of Melbourne.
  • 2008
    2009
    Stockholm, Sweden

    Erasmus Mundus Scholarship

    COMPETITIVE AWARD FOR ACADEMIC EXCELLENCE

This scholarship covered the year I spent studying at KTH Royal Institute of Technology in Stockholm.
  • 2007
    Melbourne, Australia

    Endeavour Award

Department of Education, Science and Training, Australia

I received the scholarship to spend one year as an exchange student at La Trobe University while I was an undergraduate student.
  • 2006
    2010
    Aleppo

    University of Aleppo Award for Academic Superiority

    The University of Aleppo, Syria

I received this award four times: in 2006, 2007, 2009, and 2010. I was the top student in my class in each of those years.
.04

PROJECTS

Highlighted Projects
ARTIFICIAL INTELLIGENCE, BIOMETRICS, IMAGE PROCESSING

IDscan Document Expert System

Project team : [Zaher Joukhadar, Moneer Allito, Mohammad Heskol, Mohammad Sakher Sawan, Saleh Hananno, Ahmad Sabbagh, Muhammed Rida Katby, Muhammed Draow, Osama Makansi, Anas Khayata, Humam Helfawi, Hazem Abdullah, and Mustafa Jablawi]

Project overview:

The IDscan Document Expert System (IDES) is a software library used in most of IDscan's products. The system can automatically recognise images of any government-issued document in the world, such as identification cards and passports. It can extract and label the personal information printed on the document using a state-of-the-art OCR engine that we built ourselves. The system can also authenticate the document image using a set of UV, IR, MRZ (machine-readable zone), and RFID-chip checks; it can tell whether a document is fake and provides a confidence measure of the document's genuineness.

IDES accepts images taken by any type of imaging device, such as scanners, cameras, and mobile phones. Thanks to another engine, the IDscan Document Extraction Engine, also developed by us, IDES can locate and extract the document within any given image before processing it.

IDES was built using the C# and C++ programming languages and advanced machine learning and image processing algorithms.
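
To give a flavour of the MRZ checks mentioned above, here is a minimal sketch of the ICAO 9303 check-digit rule that MRZ validation builds on. This illustrates the public standard only, not IDES code; the class and method names are mine.

    using System;

    // ICAO 9303 check digit: character values are '0'-'9' -> 0-9,
    // 'A'-'Z' -> 10-35, filler '<' -> 0; weights cycle 7, 3, 1; the
    // weighted sum mod 10 must match the printed check digit.
    public static class MrzCheck
    {
        static readonly int[] Weights = { 7, 3, 1 };

        static int Value(char c) =>
            c == '<' ? 0 :
            char.IsDigit(c) ? c - '0' :
            c - 'A' + 10;

        public static bool Validate(string field, char checkDigit)
        {
            int sum = 0;
            for (int i = 0; i < field.Length; i++)
                sum += Value(field[i]) * Weights[i % 3];
            return sum % 10 == checkDigit - '0';
        }
    }

    // Example with the specimen passport number from ICAO Doc 9303:
    // MrzCheck.Validate("L898902C3", '6') returns true.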

My role in the project:

I led the entire effort to build the system, and I designed and developed the underlying machine learning and image processing algorithms. I developed the first prototype (proof of concept) of the system, which led to establishing the Research Team at IDscan to pursue development of the system.

Patents:

One patent has been published and several more are in progress. (Note: IDscan did not include the names of any team members when they filed the patent.)

ARTIFICIAL INTELLIGENCE, IMAGE PROCESSING, NATURAL LANGUAGE PROCESSING

IDscan Semantic Engine

Project Team: [Zaher Joukhadar, Moneer Allito, Anas Khayata, Osama Makansi, Raniem Arour, Baraa Abo Helal, and Meltem Cetiner]

Project Overview:

The IDscan Semantic Engine is a software library used in some of IDscan's products. The system can recognise and semantically extract useful information from scanned images of any document type. What is unique about the Semantic Engine is its capability to recognise a document in any format: the document need not be structured or have a fixed layout. The engine uses advanced OCR technology and natural language processing to understand the document's content and to classify and extract useful pieces of information.

The Semantic Engine is currently used by many banks around the world to support their KYC (know-your-customer) processes and to process large sets of legacy scanned documents. It has proved its efficiency and uniqueness in automatically extracting full names, addresses, currencies, and tables.

The Semantic Engine was built using the C# and C++ programming languages and advanced image processing and natural language processing algorithms.
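
As a toy illustration of one narrow sub-task (pulling currency amounts out of OCR text), a rule-based pass might look like the sketch below. The real engine relies on NLP models rather than a single pattern; the class name and regex here are purely hypothetical.

    using System;
    using System.Text.RegularExpressions;

    // Illustrative only: find currency amounts such as "$1,250.00" or
    // "EUR 300" in OCR output.
    public static class CurrencyExtractor
    {
        // A currency symbol or ISO code, then an optionally grouped number.
        static readonly Regex Pattern = new Regex(
            @"(?<cur>[$€£]|USD|EUR|GBP|TRY)\s?(?<amt>\d{1,3}(,\d{3})*(\.\d+)?)",
            RegexOptions.Compiled);

        public static void Extract(string ocrText)
        {
            foreach (Match m in Pattern.Matches(ocrText))
                Console.WriteLine($"{m.Groups["cur"].Value} {m.Groups["amt"].Value}");
        }
    }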

My role in the project:

I helped conceive, explore, and elaborate the project idea. I led the initial effort to build the system and was then intermittently involved in developing and leading the project.

Patents and Awards:

Several patent applications are in progress; I will list them when they are published.

The project received a financial award from the Turkish government.

AUGMENTED REALITY

Augmented Learning Environment for Physiotherapy Education (Augmented Studio)

Project Team: [Thuong Hoang, Martin Reinoso, Zaher Joukhadar, Frank Vetere, and David Kelly]

Project Overview:

Physiotherapy students often struggle to translate anatomical knowledge from textbooks into a dynamic understanding of the mechanics of body movements in real-life patients. Augmented Studio is an augmented reality system that uses body tracking to project anatomical structures and annotations over moving bodies for physiotherapy education. It uses projection mapping to display anatomical information, such as muscles and the skeleton, in real time on the body as it moves. We also created an annotation technique that projects hand-drawn sketches onto the moving body, enabling explicit communication of the teacher's clinical reasoning strategies to the students. Augmented Studio helped facilitate a more engaging learning and teaching experience and increased communication between teachers and students.

Augmented Studio uses several projectors, a Microsoft Kinect to track human body movements, and the Microsoft RoomAlive Toolkit for spatial mapping. We combine the spatial mapping information with Kinect body tracking to transform the projected images so they correctly align in real time with the moving human body.
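
The core alignment step can be sketched as follows: take a tracked joint in Kinect camera space, move it into the projector's coordinate frame using the calibration produced by the RoomAlive procedure, and apply a pinhole projection to find the pixel to draw at. This is a simplified sketch for a single calibrated projector, not the RoomAlive Toolkit API; the transform, intrinsics, and names are assumptions for illustration.

    using System.Numerics;

    // Hypothetical names throughout. kinectToProjector is the rigid
    // transform from Kinect camera space to projector space (from
    // calibration); fx, fy, cx, cy are projector intrinsics in pixels.
    public static class BodyProjection
    {
        public static Vector2 ToProjectorPixel(
            Vector3 jointCameraSpace, Matrix4x4 kinectToProjector,
            float fx, float fy, float cx, float cy)
        {
            // Move the tracked joint into the projector's coordinate frame.
            Vector3 p = Vector3.Transform(jointCameraSpace, kinectToProjector);

            // Pinhole projection: perspective divide, then intrinsics give
            // the pixel where the overlay must be drawn to land on the body.
            return new Vector2(fx * p.X / p.Z + cx, fy * p.Y / p.Z + cy);
        }
    }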

My Role in The Project:

I developed the first prototype using the Microsoft RoomAlive Toolkit, Microsoft Kinect, and Unity3D.

IMAGE PROCESSING, MICROSOFT KINECT

Kinecting With Orangutans

Project Team: [Sarah Webber, Marcus Carter, Zaher Joukhadar, and Sally Sherwen]

Project Overview:

This project is a collaboration between The University of Melbourne’s Microsoft Research Centre for Social Natural User Interfaces (SocialNUI) and Zoos Victoria.

The project aims to study how digital enrichment can improve the welfare of Melbourne Zoo's orangutans. It utilises NUI technologies to give the orangutans a wider range of ways to interact with technology by offering them more active, full-body movement.

The installation consists of a Kinect v2 (Kinect for Xbox One) sensor, a short-range projector, and a PC. The projector projects a game onto the floor inside the orangutan enclosure; the Kinect sensor, with its superior depth sensing and 3D mapping technologies, detects touches on the projected surface and sends data to the game. Using the Kinect for Windows SDK 2.0, we developed software to receive and process the raw depth data, which we use to detect and track touches on the projected surface.

The Kinect does not need to look straight down at the projected surface. It can be placed anywhere around the surface, at any angle, provided it always maintains a line of sight to the whole projected surface and is no more than 4.5 metres from the farthest point on it.
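
One plausible shape for the depth-based touch detection is background subtraction: capture the surface's depth image once, then flag pixels where something hovers just above it. The sketch below is hedged, not the deployed code; the thresholds and names are illustrative.

    // Hypothetical sketch: backgroundMm holds a depth image (millimetres)
    // of the empty projected surface, captured once at startup.
    public static class TouchDetector
    {
        // A pixel is a touch candidate when something sits just above the
        // surface: closer than the background by roughly 10-50 mm.
        const int MinOffsetMm = 10, MaxOffsetMm = 50;

        public static bool[] Detect(ushort[] depthMm, ushort[] backgroundMm)
        {
            var touch = new bool[depthMm.Length];
            for (int i = 0; i < depthMm.Length; i++)
            {
                int offset = backgroundMm[i] - depthMm[i];
                touch[i] = depthMm[i] != 0           // 0 = no depth reading
                        && offset >= MinOffsetMm
                        && offset <= MaxOffsetMm;
            }
            // Neighbouring candidates would then be clustered into touch points.
            return touch;
        }
    }

Because the comparison is per pixel against a captured background, the same idea works regardless of the angle the Kinect views the surface from.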

My Role in The Project:

I led the entire technical and software development side of the project. I developed the software that detects touches on the projected area using the Microsoft Kinect, and I developed all the games, using C#, Node.js, and the Processing language.

Media:

The project attracted significant media attention; several newspapers and a few TV channels covered it.

MACHINE LEARNING, MICROSOFT KINECT

Kinect Technology for Remote Assessment of Interventions for Young Children with Autism Spectrum Disorders

Project team members: [Zaher Joukhadar, Ken Clarke, Robyn Granett, Tricia Eadie, and Bronwyn Davidson]

Project overview:

The project aims to provide innovative speech pathology services for children with Autism Spectrum Disorder (ASD) and their families in rural areas. The Kinect sensor plays a key role, being used as a novel remote feedback and assessment tool for the quality of parent-child interactions.

This project is developing an automated tracking and analysis system. The software provides meaningful statistics on the quality of the parent-child interaction, with a prototype dashboard that takes the output of the Kinect sensor and displays both real-time and cumulative measurements alongside avatar skeleton figures. Measurements include head-height offset, proximity, number and position of touches, voice recognition, and real-time static pose recognition, as well as a rudimentary overall 'Q' factor for the session.
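
Two of the listed measurements reduce to simple geometry over the tracked skeletons. The sketch below shows one way to compute per-frame proximity and head-height offset, assuming each skeleton is reduced to a few 3D joint positions in metres; the struct and names are illustrative, not the project's actual types.

    using System;

    // Hypothetical minimal joint type; the Kinect reports camera-space
    // joint positions in metres.
    public struct Joint3 { public float X, Y, Z; }

    public static class InteractionMetrics
    {
        static float Distance(Joint3 a, Joint3 b) =>
            (float)Math.Sqrt((a.X - b.X) * (a.X - b.X) +
                             (a.Y - b.Y) * (a.Y - b.Y) +
                             (a.Z - b.Z) * (a.Z - b.Z));

        // Proximity: distance between the two torsos in a given frame.
        public static float Proximity(Joint3 parentSpine, Joint3 childSpine) =>
            Distance(parentSpine, childSpine);

        // Head-height offset: vertical gap between heads, a rough proxy
        // for whether the parent is down at the child's level.
        public static float HeadHeightOffset(Joint3 parentHead, Joint3 childHead) =>
            parentHead.Y - childHead.Y;
    }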

It is envisaged that, as the speech-pathology intervention progresses, the automatically generated quality factors from the toolbox will show a marked improvement. This would also independently validate the intervention process. In future iterations the toolbox could be used as an 'expert system' that provides speech-pathology support in areas that are typically underserved.

My Role in The Project:

I led the entire technical and software development side of the project. I developed machine learning algorithms to detect and recognise the child's and caregiver's movements, and I built the software using the C# programming language.

IMAGE PROCESSING, MICROSOFT KINECT

Encounters: Collective Bodies, Creative Spaces

Project Team: [John Downs, Zaher Joukhadar, Travis Cox, Diego Giraldes, Manoel Ribeiro, Juliano Rotta, and others]

Project Overview:

Encounters is an interactive digital art installation created in collaboration between the Victorian College of the Arts (VCA) and Microsoft SocialNUI. It was part of Melbourne's SummerSalt Festival and White Night 2015.

We created a large installation that explored the world of human-computer interaction. The project was built in a highly collaborative manner: the team of 31 people came from a wide range of technical and artistic backgrounds, including visual arts, composition, dance, lighting, interactivity, computer vision, and computer science.

The installation consists of a Microsoft Kinect placed overhead. We built software to track people while they dance; it can recognise several types of movement, such as walking, running, crouching, and jumping. As the audience entered the space, the Kinect sensors detected their presence, triggering sounds and lighting effects based not only on what participants were doing but on what they were doing relative to everybody else. Every audience member was represented on the screen, and particular actions changed that representation: if audience members jumped or came together with other people, the way they were represented changed. Approximately 1,200 people went through the installation over four Saturday nights.

My Role in The Project:

I was responsible for the technical and software development side of the project. I worked with four SocialNUI colleagues and interns on developing Kinect-based software that detects people dancing and recognises their dance movements; the software sends data over OSC to the other parts of the installation, such as the visuals, lighting, and audio.
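
OSC (Open Sound Control) messages are small UDP packets: a null-padded address string, a type-tag string, then big-endian arguments. As a hedged sketch of how a movement event could be sent to the lighting or audio machine (the address, host, and port below are made up for illustration):

    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Text;

    // Hand-rolled OSC over UDP; a real project might use an OSC library.
    public static class OscSender
    {
        // OSC strings are null-terminated and padded to 4-byte boundaries.
        static byte[] Pad(byte[] bytes)
        {
            var padded = new byte[(bytes.Length / 4 + 1) * 4];
            Array.Copy(bytes, padded, bytes.Length);
            return padded;
        }

        // OSC numeric arguments are big-endian.
        static byte[] BigEndian(float v)
        {
            var b = BitConverter.GetBytes(v);
            if (BitConverter.IsLittleEndian) Array.Reverse(b);
            return b;
        }

        // Example: Send("/dancer/1/jump", 0.8f) notifies the lighting rig.
        public static void Send(string address, float value)
        {
            var packet = new List<byte>();
            packet.AddRange(Pad(Encoding.ASCII.GetBytes(address)));
            packet.AddRange(Pad(Encoding.ASCII.GetBytes(",f"))); // one float argument
            packet.AddRange(BigEndian(value));

            using (var udp = new UdpClient())
                udp.Send(packet.ToArray(), packet.Count, "127.0.0.1", 9000);
        }
    }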

WEB TECHNOLOGIES

Wearable Technology for Arm Monitoring in Health (Armsleeve)

Wearable Technology for Arm Monitoring in Health (Armsleeve)

Project Team: [Justin Fong, Frank Vetere, Thuong Hoang, Zaher Joukhadar, Bernd Ploderer, and others]

Project Overview:

The aim of this project is to develop wearable technology that tracks arm movement in daily life and visualises such data for physiotherapists. This technology has the potential to assist physiotherapists in the assessment and rehabilitation of stroke patients.

This project will investigate the potential use of natural user interfaces (NUIs) to support stroke patients and their clinicians in the arm rehabilitation process. The project team is designing a low-cost wearable technology called 'ArmSleeve' to monitor how patients use their affected arm in daily life. The technology will be deployed to patients and therapists to study how the collected information can support therapists in assessing patients and in working with them on interventions to develop motor skills and to train everyday activities, such as putting on clothes.

My Role in The Project:

I developed a web-based dashboard to visualise ArmSleeve sensor data, using Microsoft Azure, Node.js, and D3 to build the website.

.05

CONTACT

Drop me a line

GET IN TOUCH

Simply use the form below to contact me. I am open to coffee invitations and friendly chats :-). Thank you for visiting my personal website.