Yonah Welker      

Explorer, Public Evaluator, Board Member at Yonah.ai; Emerging Tech & Artificial Intelligence Expert

Yonah Welker is a technologist, public expert, and advocate for the development of algorithms and policies that address society, human capacity, and designated groups and spectrums, spanning social and human-centered AI, robotics, and emerging learning, work, health, and city solutions.

Welker has made over a hundred public appearances and published commentary to raise awareness of social technologies and policies, including AI and Digital Acts, intergovernmental treaties, and ontologies of assistive systems and algorithms. He has curated and served as an ambassador for ethics initiatives (e.g., the Global AI Summit for the Good of Humanity and humanitarian projects) and has provided commentary and consultations for the public and for authorities (e.g., authorities for AI and data, telecommunications, and economic and social development).

Welker’s commentary, contributions, and work have been featured by the White House PCAST, the World Economic Forum, the OECD, UNESCO, the World Health Organization, and others.

Speech Topics


Algorithmic Diversity - Neurodiversity-Inclusive AI & Human Rights

During this live session, we will dive into the meaning and application of algorithmic diversity in technology, including cases such as neurodiversity. Drawing on the latest experience, cases, and research, we will analyze the current state and challenges of inclusive innovation and technology: problems of representation and criteria; inclusive research and design thinking; the building of inclusive products (AI-driven platforms, devices, apps, and social and emotional robotics); ethical considerations and concerns (the “black-box” and “double-check” problems, transparency, explainability, fairness, and surveillance); and the shortcomings of current technology ecosystems, policies, and human rights frameworks.

Algorithmic Diversity - Disability-Inclusive AI & Human Rights

This session applies a disability-equity lens to the machine learning pipeline. Algorithms can perpetuate societal inequities and cultural prejudices, and bias can enter at different stages due to priorities that do not include (dis)ability equity considerations, organizational structure, and unconscious and conscious bias within the team developing the algorithm.

Accessible Nations, Policies & Human Rights Framework

At least 1 in 6 people live with one or more neurological conditions, 1 in 7 are neurodivergent, and 1 in 4 adults suffer from a diagnosable mental disorder in a given year. All of these differences are connected to spectrums of ability and comorbidity, sensitivity, physical and tactile experiences, visual and color experiences, and differences in systems of learning, memorizing, systemizing, and empathizing. These “invisible disabilities” lead to the highest levels of social exclusion, isolation, and rejection across all conditions and states. Unemployment among people with autism is approximately 85%; among those with severe mental health disorders, 68-83%; and among those with Down syndrome, 43%. Social, economic, and environmental parameters add even more criteria to this analysis, distributed across different social layers, subgroups, and communities.

How can nations, cities, and ecosystems embrace human-centered algorithms, environments, policies, and emerging technologies that specifically address human capability, wellbeing, experiences, and beyond?
