Prof. Edward Y. Chang
ACM Fellow / IEEE Fellow
Stanford University, USA
Edward Y. Chang has been an adjunct professor of Computer Science
at Stanford University since 2019 and a visiting chair
professor at Asia University. His current research interests
include consciousness modeling, generative AI, and
healthcare. Chang received his MS in CS and PhD in EE, both
from Stanford University. He joined the ECE department of UC
Santa Barbara in 1999, where he was tenured in 2003 and
promoted to full professor in 2006. From 2006 to 2012, Chang
served at Google as a director of research, leading research
and development in areas such as scalable machine learning,
indoor localization, Google QA, and recommendation systems.
In subsequent years, Chang served as the president of HTC
Healthcare (2012-2021) and a visiting professor at the UC
Berkeley AR/VR Center (2017-2021), working on healthcare
projects including VR surgery planning, AI-powered medical
IoT devices, and disease diagnosis. Between 2019 and 2022, Chang
also served at SmartNews, a Tokyo-based unicorn, as its
chief NLP advisor. Chang is an ACM Fellow and an IEEE Fellow
for his contributions to scalable machine learning and
healthcare.
Speech Title: The Future of AI: From Unconscious Models
to Conscious Reasoning and Emotional Intelligence
Abstract: In this presentation, I first highlight the
advancements in foundation models over the past decade and
discuss their impact on various domains. I then examine known
issues such as their lack of robustness, interpretability,
and generalization. I argue that AI models have successfully
modeled human unconsciousness to a certain extent; however,
to enable AI to think and plan, I propose [1] modeling
consciousness on top of unconsciousness. The discussion
begins with a definition of consciousness, the transitional
mechanisms between consciousness and unconsciousness,
theories of consciousness, and the distinction between
attention and consciousness. I adopt a functionalist approach
to formulate a model of consciousness that encompasses
perception, awareness, attention, critical thinking, creative
thinking, and emotional intelligence. I then illustrate how
these consciousness capabilities can be developed in
foundation models through the careful design of prompt
templates. Specifically, I demonstrate how the Socratic
method [2], together with inductive, deductive, and abductive
reasoning, can facilitate critical thinking and reading via
prompt-template engineering (see the illustrative sketch
after the references below). I also discuss the integration
of emotional intelligence to guide exploratory thinking, with
ethical safeguards that respect individual and cultural
values.
[1] Edward Y. Chang, “CoCoMo: Computational Consciousness
Modeling for Generative and Ethical AI,” arXiv:2304.02438,
2023.
[2] Edward Y. Chang, “Prompting Large Language Models With
the Socratic Method,” IEEE 13th Annual Computing and
Communication Workshop and Conference (CCWC), March 2023.
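To make the prompt-template engineering described in the abstract concrete, here is a minimal sketch in Python of how Socratic probes spanning deductive, inductive, and abductive questioning might be assembled into a single prompt for a language model. The probe wording and the build_socratic_prompt helper are illustrative assumptions, not the templates used in the talk or in [1] and [2].

# Illustrative sketch only: a Socratic prompt template for critical reading.
# The probe wording and helper below are assumptions, not taken from [1] or [2].

SOCRATIC_PROBES = {
    "clarification": "What exactly is the claim, and which terms need definition?",
    "deductive": "Given the stated premises, what conclusions must follow?",
    "inductive": "What general pattern do the cited examples actually support?",
    "abductive": "What is the most plausible explanation of the evidence, "
                 "and what rival explanations remain?",
    "counterexample": "What evidence or scenario would falsify the claim?",
}

def build_socratic_prompt(passage: str) -> str:
    """Wrap a passage in Socratic probes to elicit step-by-step critical reading."""
    questions = "\n".join(f"- {q}" for q in SOCRATIC_PROBES.values())
    return (
        "Read the passage below, then answer each question in turn, "
        "citing the passage where possible.\n\n"
        f"Passage:\n{passage}\n\nQuestions:\n{questions}"
    )

if __name__ == "__main__":
    print(build_socratic_prompt(
        "Foundation models have modeled human unconsciousness to a certain extent."))

The resulting prompt can be sent to any instruction-following model; the point is that the reasoning scaffold lives in the template rather than in the model weights.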
Prof. Fakhri Karray
IEEE Fellow
University of Waterloo, Canada
Fakhri Karray is the founding co-director of the University
of Waterloo Artificial Intelligence Institute and is the
Loblaws Research Chair in Artificial Intelligence in the
Department of Electrical and Computer Engineering at the
University of Waterloo, Canada. He is also a Professor of
Machine Learning and the former Provost at the Mohamed bin
Zayed University of Artificial Intelligence (MBZUAI), a
graduate-level, research-based artificial intelligence (AI)
university in Abu Dhabi, UAE. Fakhri’s research interests
are in the areas of operational AI, cognitive machines,
natural human-machine interaction, and autonomous and
intelligent systems. Applications of his research include
virtual care systems, cognitive and self-aware
machines/robots/vehicles, predictive analytics in supply
chain management, and intelligent transportation systems. He
serves as an Associate Editor and a member of the editorial
boards of major publications in smart systems and information
fusion.
His most recent textbook on foundational machine learning,
“Elements of Dimensionality Reduction and Manifold Learning,”
was published by Springer Nature in February 2023. He was
honored in 2021 by the IEEE Vehicular Technology Society
(VTS) with the IEEE VTS Best Land Transportation Paper Award
for his pioneering work on improving traffic flow prediction
with weather information in connected cars using deep
learning and AI. His recent work on federated learning in
communication systems earned him and his co-authors the 2022
IEEE Communications Society’s MeditCom Conference Best Paper
Award. Fakhri is a Fellow of the IEEE, a Fellow of the
Canadian Academy of Engineering, and a Fellow of the
Engineering Institute of Canada. He has served as a
Distinguished Lecturer for the IEEE and as a Kavli Frontiers
of Science Fellow. Fakhri
received the Ing. Dip. degree in electrical engineering from
the School of Engineering of the University of Tunis, Tunisia,
and the Ph.D. degree from the University of Illinois
Urbana-Champaign, USA.
Speech Title: Generative vs. Operational Artificial
Intelligence: Opportunities and Challenges
Abstract: The talk presents recent trends and major advances
in the field of Artificial Intelligence (AI), specifically
Operational and Generative Artificial Intelligence (OAI/GAI).
As demonstrated by impressive accomplishments in the field
(such as ChatGPT and other generative AI-based engines) and
driven by fundamental advances in machine learning, experts
predict that we are at the cusp of a new technological
revolution. AI is expected to grow world GDP by up to 20% by
2025, amounting to more than 15 trillion dollars of growth
over the next few years. These developments have
significantly impacted technological innovations in the
fields of the Internet of Things, self-driving machines,
powerful chatbots, virtual assistants, intelligent
human-machine interfaces, large language models, real-time
translators, cognitive robotics, virtual care systems,
eHealth, and Fintech, to name a few. Although AI constitutes
an umbrella of several interrelated technologies, all aimed
at imitating, to a certain degree, intelligent human behavior
or decision making, deep learning algorithms are considered
the driving force behind the explosive growth of AI and its
applications in almost every scientific and technological
sector: disease diagnosis, remote health care monitoring,
financial market prediction, self-driving vehicles, social
robots with cognitive skills, intelligent manufacturing,
surveillance, cybersecurity, and intelligent transportation
systems, to name a few. The talk highlights the milestones
that led to the current growth of AI, OAI, and GAI, discusses
the role of academic institutions, and surveys some of the
major achievements in these fields. It also enumerates the
real challenges that arise when these innovations are
misused, leading to potential negative effects on society and
end users.
Prof. Ling Liu
IEEE Fellow
Georgia Institute of Technology, USA
Ling Liu is a Professor in the School of Computer Science at
Georgia Institute of Technology. She directs the research
programs in the Distributed Data Intensive Systems Lab
(DiSL), examining various aspects of large-scale, big
data-powered artificial intelligence (AI) systems and
machine learning (ML) algorithms and analytics, including
performance, availability, privacy, security, and trust.
Prof. Liu is an elected IEEE Fellow, a recipient of the IEEE
Computer Society Technical Achievement Award (2012), and a
recipient of best paper awards from numerous top venues,
including IEEE ICDCS, WWW, ACM/IEEE CCGrid, IEEE Cloud, and
IEEE ICWS. Prof. Liu has served on the editorial boards of
over a dozen international journals, including as the
Editor-in-Chief of IEEE Transactions on Services Computing
(2013-2016) and as the Editor-in-Chief of ACM Transactions on
Internet Technology (since 2019). Prof. Liu is a frequent
keynote speaker at
top-tier venues in Big Data, AI and ML systems and
applications, Cloud Computing, Services Computing, Privacy,
Security, and Trust of data-intensive computing systems. Her
current research is primarily supported by the USA National
Science Foundation under its CISE programs, IBM, and Cisco.
Speech Title: Can Federated Learning Be Responsible?
Abstract: Federated learning (FL) is an emerging distributed
collaborative learning paradigm that decouples the learning
task from a centralized server and distributes it across a
decentralized population of edge clients. One of the
attractive features of federated learning is its default
client privacy: clients jointly learn a global model while
keeping their sensitive training data local and sharing only
local model updates with the federated server(s). However,
recent studies have revealed that such default privacy is
insufficient for protecting the confidentiality of client
training data and the safety of the global model. This
keynote will describe model leakage risks and model poisoning
risks in distributed collaborative learning systems, ranging
from image understanding and video analytics to large
language models (LLMs), and provide insights into
risk-mitigation methods and techniques for ensuring
responsible federated learning.
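To make the share-updates-only protocol concrete, the following is a minimal federated averaging (FedAvg) sketch in Python/NumPy, assuming a linear least-squares model, five simulated edge clients, and one local gradient step per client per round; the data, model, and hyperparameters are illustrative assumptions, not the systems analyzed in this keynote. The local updates gathered by the server are precisely the surface exposed to the gradient-leakage and model-poisoning risks the talk addresses.

# Minimal FedAvg sketch (illustrative assumptions: linear model, synthetic data,
# one local gradient step per round). Clients never send raw data, only weights;
# those shared updates are where leakage and poisoning attacks operate.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One local least-squares gradient step on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, clients):
    """Each client trains locally; the server averages the returned models."""
    updates = [local_update(global_weights.copy(), X, y) for X, y in clients]
    return np.mean(updates, axis=0)  # in practice, weighted by client data size

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):                       # five edge clients with private data
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):                      # fifty communication rounds
    w = federated_round(w, clients)
print("global model after federated training:", w)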