Ziad Nawar, MSc
Data Scientist
Passionate and collaborative computer science professional with expertise in software development, machine learning, and statistical methods. Demonstrated experience leading diverse teams and delivering successful projects. Strong research skills and a solid foundation in AI/ML and software development, eager to contribute to innovative projects and further develop leadership abilities.
About Me
I'm a Data Scientist with a Master's degree in Artificial Intelligence from TU Delft. My expertise spans across machine learning, natural language processing, computer vision, and full-stack development. I'm passionate about leveraging AI and data science to solve complex problems and create innovative solutions.
Currently working at Datacation, I focus on developing AI solutions and data science applications. Previously, I've worked with companies like Artefact and Grant Thornton, where I've led projects involving causality analysis, time-series forecasting, and full-stack development using modern technologies.
Experience
Amsterdam Zuid, Netherlands · Hybrid
Sep 2024 - Present · 1 year 3 months
Datacation is an AI and data science consultancy, currently in its startup phase.
Utrecht, Netherlands · Hybrid
Sep 2023 - Sep 2024 · 1 year
Artefact is a consultancy company specialising in AI and digital marketing.
- Collaborated within a team to analyze marketing performance for a multi-billion-dollar retail client in the US market, leveraging causality analysis and time-series forecasting techniques.
- Led the end-to-end development and deployment of a Bayesian-based ranking model for a popular e-sports game, including research, planning, implementation, and delivery.
- Mentored an intern over a 5-week period, providing technical guidance and ensuring timely completion of an internal solution aligned with project goals and deliverables.
Gouda, South Holland, Netherlands · Hybrid
Nov 2020 - Sep 2023 · 2 years 10 months
Grant Thornton is a global audit, tax, and advisory firm.
- Collaborated with non-technical colleagues, translating business challenges into actionable technical solutions.
- Delivered multiple successful full-stack projects using Outsystems, including the Tax Reporting System.
- Designed and implemented a customized surveying system, enhancing workflows.
- Developed systems for financial document parsing and invoice generation automation.
Netherlands · Remote
May 2020 - Jul 2020 · 2 months
ScenWise is a software development company working in the mobility sector.
- Contributed to the back-end of a traffic data analysis web application using Java, REST APIs, the Spring framework, and database technologies.
Projects
Research
This thesis addresses the issue of reliability in machine learning (ML) systems used for computer vision applications, which can fail when faced with slightly different data than their training set. The author proposes a system to help ML practitioners debug their computer vision models before deployment. The system utilizes human computation operations to identify expected model behaviors, compare them to actual model behaviors, and assess model performance. The author claims that their system is the first to allow ML practitioners to define debugging goals, configure debugging sessions, and automatically generate model debugging reports. They conducted a comprehensive evaluation of the system, demonstrating its correctness, informativeness, and cost-effectiveness, even when considering potential human errors. Despite some limitations, the work represents an important step towards assisting practitioners in debugging ML models, encouraging further optimization, and providing open access to their code for broader usage and experimentation.
View Paper
The paper discusses collaboration between humans and digital computers in various tasks, emphasizing the importance of combining artificial intelligence (AI) agents with humans to improve team effectiveness. The focus is on the human-AI agent team and the role of shared mental models in team performance. The paper aims to experimentally analyze how different shared mental models impact human-AI agent collaboration. The hypothesis is that exchanging more information within the shared mental model between humans and AI agents leads to higher team performance. The paper outlines the research questions, subdivides the research problem, and describes the experimental design. It also presents results and statistical analyses, touches on ethical considerations, and provides a discussion of the experiment's outcomes and future research recommendations before concluding.
View Paper