Event Recap: Latest Engineering Trends for Artificial Intelligence and Machine Learning

Industry experts share insights into how AI and Machine Learning can help you to stay one step ahead

Technology leaders are under constant pressure to deliver the latest analytic innovations. In reality, however, taking on AI and Machine Learning (ML) requires a measured balance between technology adoption, cultural change and a clear grasp of the new normal.

Alexander I. Iliev—Professor and Academic Head at SRH Berlin University of Applied Sciences—moderated this panel discussion with practitioners to explore some of the hot topics in the field, such as how to effectively productize AI and ML applications.

 

Alexander Iliev, Ph.D.

“We don't think about AI as something that we can actually use in everyday life, and we can easily solve a lot of problems with it.”

 

Meet the Panelists

Ali Rebaie, Data Anthropologist, President at Rebaie Analytics Group
Nashlie H. Sephus, Tech Evangelist at Amazon AI, Founder of TheBeanPath.org
Michael Wu, Chief AI Strategist at PROS

 

What is your relation to engineering trends for AI and Machine Learning?

Nashlie Sephus: I am a tech evangelist/former applied scientist at Amazon on the AWS AI team that focuses on fairness and mitigating biases in those technologies.

And, of course, I'm a consumer of the technology. I think a lot of times when products don't work for us, we just assume, “Okay, that's just the way it is.” But it shouldn't be that way. That is something that I focus on as my passion and advocacy behind Machine Learning, as well as AI and those technologies.

Ali Rebaie: I like to call myself a developmental anthropologist because I study social trends. I also research our ancestors' cognitive and cultural evolution.

I believe that's very important because it will help us stage emotional human experiences and human-centric AI. For AI to achieve what it has promised, it has to be combined with the social sciences and arts.

Michael Wu: I have a lot of experience working with large enterprises that are adopting AI.

The pandemic has created an acceleration toward this digital world, which is contactless, safer and more efficient. As a result, I see that a lot of companies are struggling to adopt this technology, even though there are a lot of benefits.

Even so, the adoption of this technology, especially in the business world, has been slower. Now, consumers have no problem adopting these types of technology. And businesses essentially are forced to follow.

But it's very often the case that for consumers, when the AI doesn't work, it creates some inconvenience. The cost of an error, of a wrong decision, in the consumer world is probably small: something of an inconvenience, a loss of a few minutes or a little bit of time. But when you apply these types of technology in the business world, a wrong decision could mean millions of dollars in losses, irreparable reputation damage, and lost customers, trust and brand value.

These are very high-stakes decisions, and so businesses are very often reluctant to adopt AI for that reason. It's a fear of something gone wrong. I help a lot of companies manage a team of data scientists to build these systems and help them launch and deploy successfully. 

 

Nashlie, why did you choose to talk about different types of biases in data in your lecture?

When consumers use technologies that deal with AI, a lot of times they're not sure how the technology works. They assume, “This is a magical thing that we're just going to take its word. Whatever it says, that's what we're going to go with.”

However, the data that you put into the algorithm very much determines how robust, accurate and fair these AI models are, and how well they work on various groups, backgrounds and intersections of those groups.

Any AI or Machine Learning tool is really dependent on the data—especially if you're training models and learning from prior data. I always say what my math teacher used to say: "garbage in, garbage out."

So when data fails us, people are concerned about their rights, especially when it comes to mission-critical use cases where you have law enforcement involved or the military using AI tools and technologies.

When it comes to health care and things like law enforcement, you want to make sure that the data is well balanced, well representative, and speaks to and for the people who are going to be using that technology. We want to make sure you're not marginalizing any groups.

All these things should be considered when you're working with humans. And so we want the data to not fail us. Because when the data fails us, then right off the bat you have a problem. And so if we can continue to get this data problem corrected, then we can focus on some of the other issues that may exist. But it really all starts with that.
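
As a rough illustration of the kind of check Nashlie is describing, here is a minimal sketch in Python that reports how well each group is represented in a dataset before any model is trained. The column name and the toy counts are hypothetical, not drawn from any real system.

```python
# Minimal sketch (hypothetical column name, toy data): report how well each
# group is represented in a dataset before any model is trained.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Return the count and share of rows belonging to each group."""
    counts = df[group_col].value_counts()
    return pd.DataFrame({
        "count": counts,
        "share": (counts / len(df)).round(3),
    })

# Toy example: one group dominates the data.
df = pd.DataFrame({"group": ["A"] * 900 + ["B"] * 80 + ["C"] * 20})
print(representation_report(df, "group"))
# A group that makes up only 2% of the rows is likely to be under-served
# by any model trained on this data.
```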

Alexander: In your lecture you say that an algorithm may be both correct and biased at the same time. Would you please clarify what is meant by this?

Nashlie: Let's say you have an application that you're building for a particular region to be used on a particular group of people. And you already know who those people are.

You want to tweak your algorithm to have the highest accuracy rate for this one group of people because you already know what the application is. In that case, you're intentionally creating bias in the system.

You're intentionally wanting to maximize the accuracy, minimize the error for that particular group. And that can be okay if that's your intention. What we often find is that those are not our intentions, and it ends up happening anyway.
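
A simple way to see whether that kind of bias is intentional or accidental is to measure accuracy per group rather than only in aggregate. Below is a minimal sketch with toy labels and hypothetical group tags:

```python
# Minimal sketch (toy labels, hypothetical group tags): compare a model's
# accuracy per group instead of only in aggregate.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, grp in zip(y_true, y_pred, groups):
        totals[grp] += 1
        hits[grp] += int(truth == pred)
    return {grp: hits[grp] / totals[grp] for grp in totals}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.25}
# A gap this large is acceptable only if it is an explicit, documented
# design choice; otherwise it is exactly the unintentional bias described.
```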

 

Michael, what made you select the topic of industrial AI adoption, maturity and challenges along the way?

This is a very important topic because just having consumers adopting this technology is good, but obviously it's not good enough. To move forward as a society and allow our innovation and economic growth to progress to the next level, businesses need to be able to adopt this technology and be able to trust it and use it effectively and fairly.

In a competitive business world, it is sometimes very tempting to use this technology in an unfair way to gain more advantage for yourself. But there needs to be measures put in place to ensure that doesn't happen.

And where I find this interesting—specifically relevant to the challenges facing trained data scientists and engineers—is that very often these are not really technical problems. Very often, they involve human psychology, design and legal considerations. A lot of these are challenges that our backgrounds have not trained us to handle.

AI certainly has technical aspects to it, but it also has a lot of other aspects around it that are actually very important for us to learn.

 

Ali, what was the main motivation for you to choose the topic of building human-centric Machine Learning solutions?

Let's go back to how we used to experience technology:

We have two hunting technologies—the atlatl and the arrow. Cognitive, conceptual, technological and even behavioral flexibility were amplified when we adopted the arrow hunting technology compared to atlatl hunting.

The invention of those new technologies can affect the way a society is behaving.

Now, imagine the profound changes that AI could bring in this transition. Both the atlatl and the arrow are static tools. Now we are talking about a dynamic tool.

An algorithm learns through use, and the way we experience algorithms is now shaped by the uncertainty of how data patterns and the parameters of Machine Learning algorithms converge into a model.

So, for example, if we take Facebook, I experience the news feed, the UI, but I don't know what's happening behind the scenes. The algorithm we never see behind that use case keeps on learning, keeps on changing, without us knowing.

We always come across model uncertainty, or epistemic uncertainty, and data uncertainty, or aleatoric uncertainty. But what we are also talking about here is cultural uncertainty. Bias can happen when you fail to understand the culture you are deploying the Machine Learning model for. I believe these are important topics where engineers play an important role, especially when they are developing these models.

 

Why is phenomenology important for AI and what is the connection?

Ali: When we talk about phenomenology, it's about experience, understanding the context, understanding the culture.

In my course, I go over different types of bias as expressed by each stakeholder—from raw data to EDA, to feature engineering to model building, and then consumption.

So imagine a recruiting system where a candidate is required to have a master's degree. I can fix for you the bias of favoring short people over tall people, but that cannot fix the ground truth. It's tough for the algorithm to access that ground truth.

It's not a data problem. It's not a Machine Learning problem. How do you solve the truth problem? It's by collaboration among different engineers and understanding the different backgrounds, different societies.

This is where we can understand these common issues that are faced by engineers. And that's what phenomenology is about: experience and culture.

 

Why is it that many companies are still confused about what a true AI system is?

Michael: Every company wants to say that they're doing AI. Even if you're doing something simple such as data analytics, you would try to market yourself as doing AI so that you get higher valuation, and more interest and conversation about you.

That is the driver that created this confusion. AI is actually a topic that has evolved over many years. Back when you had simple things such as optical character recognition, people thought that was AI, because you had a system that could read print and handwriting.

So that sounds like something that only an intelligent machine can do, right? So they call it AI.

Ali: When it comes to AI, since the 1950s a knowledge engineer would feed an expert system that reasoned with preset rules or symbols.

If you apply that kind of AI now, you cannot easily define the rules for a self-driving car to detect all of the different pedestrians it might face. It won't account for every possible case.

Michael discussed that in his lecture. He said that AI is about learning and automation. So rather than trying to encode high-level knowledge and logical reasoning in expert systems, Machine Learning now employs a kind of bottom-up approach: algorithms discern relationships by repeating tasks and, for example, classifying visual objects and images or transcribing recorded speech into text.

I believe AI could be able to understand or draw from experiences and expectations. Because until now, AI could only analyze what it sees. And it did not have a broad non-conceptual understanding of the ideas and the phenomena around us.

 

What does AI maturity mean and what can we expect when you go into the depth of the AI maturity curve?

Michael: The AI maturity curve was developed to help companies and organizations understand what's coming, because there is a natural progression of adoption as maturity grows.

The first step is what we call digital transformation. It's digitization. Collect data. The next step would be to exchange this data for some kind of automation. You build a model.

But then these automated tasks probably won't do a perfect job. So at the next phase, you have to learn from humans and essentially refine this model.

So this maturity curve is developed in a way that's vendor- and business-agnostic. It's a natural progression of how AI adoption would occur in an organization. Finally, you involve autonomous AI running robotics. And then eventually, we may even be able to support an AI-augmented economy where a lot of our repetitive work may be automated.

 

How can an AI engineer become an experienced artist who can design better human-centered AI products?

Ali: This is a question about trust. Engineers’ roles should shift from just focusing on the accuracy of the models to thinking more about how they can suspend disbelief for the sake of creating unforgettable experiences and spectacles for their users. It's that kind of shift in mindset that is important for engineers whenever they are working on Machine Learning applications.

 

Why do topics such as ethics and equality remain overlooked?

Michael: It's very difficult to define an objective function for ethics. When you talk about Machine Learning or an AI system, typically there's some kind of objective function that we optimize.

Do you optimize it to find the true optimal or to find the average optimal? We always focus on finding the one model that minimizes error. We focus on these hard metrics. Sometimes we overlook the fact that there's a human aspect to it.

But there are actually a lot of these affective computing, empathic systems. And they try to add terms into the objective function to try to quantify: “What is a good experience? What does ethics mean?”

But it's often very difficult to find a term that essentially meets all the criteria of being ethical. Legal is probably easier. So if you have a system that has encoded all the laws in our society, any time your solution goes outside of the boundary of the laws, then your cost function becomes infinite.

And it's very often contextual and use case dependent. These are challenges that I don't think we have a solution to yet. I've dealt with this from time to time. And very often, I have to deal with it case by case.
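
To make the objective-function idea above concrete, here is a minimal sketch: the soft "ethics" term is weighted into the loss, while any legal violation drives the cost to infinity. The penalty value, the weight lambda_ethics and the legality check are hypothetical placeholders that would need to be defined per use case.

```python
# Minimal sketch of an objective that mixes a hard error metric with a soft
# "ethics" term and a hard legal constraint. The penalty value, the weight
# lambda_ethics, and the legality check are hypothetical placeholders.
import math

def total_objective(prediction_error: float,
                    ethics_penalty: float,
                    violates_law: bool,
                    lambda_ethics: float = 0.1) -> float:
    if violates_law:
        # Any solution outside the legal boundary gets infinite cost,
        # so an optimizer can never prefer it.
        return math.inf
    # Soft trade-off: prediction error versus the hard-to-define ethics term.
    return prediction_error + lambda_ethics * ethics_penalty

print(total_objective(0.12, ethics_penalty=0.5, violates_law=False))  # 0.17
print(total_objective(0.05, ethics_penalty=0.0, violates_law=True))   # inf
```

As the panel notes, the hard part is defining the ethics term at all; the weighting and the legal check are the easy pieces.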

 

What advice can you give to an AI engineer who implements technologies to stay ethical, to be unbiased?

Nashlie: Ethics and biases and fairness—they all have different meanings, and it really depends on what the use case is and the application.

I will say that in terms of fairness and having unbiased AI applications, you want to make sure that, in general, you provide a lot of different perspectives at the concept stage of the product.

Hopefully, you have a very diverse engineering team across gender, age, ethnicity, backgrounds, regions, et cetera. And if you don't have these people, leverage focus groups.

As engineers, scientists and developers, we have to be very conscious of the dynamics that are going on in the world or in whatever region it is that we're working in.

We experienced a lot last year in terms of social justice and racial equity. When it comes to these things, especially in law enforcement and technologies that have humans in the loop or human applications, you want to make sure the team understands that, for example, a particular racial dynamic is very sensitive.

You'll see some engineering and tech teams include sociologists to get that extra input.

 

What advice would you give to someone who is in engineering and building an AI system, keeping in mind the challenges businesses usually face when adopting AI technologies?

Michael: Understanding the business use cases is crucially important. Certainly, having an inclusive team and diversity of opinions are very important.

In most practical scenarios, it's often very challenging to include all diversity. Even in a focus group, sometimes you have selection biases. You try to mitigate all these biases, but one thing that we must not forget is that even though we try to minimize this bias, sometimes we miss something.

If you take this system and apply it to a different society where you have different culture, different norms, then you need to evaluate it and constantly monitor and refine.

To be a truly intelligent system, it needs to exhibit this character of being able to adapt. So whenever we take these systems out of the context in which they were trained and built, we need to constantly evaluate them and retrospectively include more data, more information and more diversity.

 

Audience question: One of the AI challenges is the trust deficit. Non-AI consumers and even AI experts themselves are not clear on how inputs lead to the outputs that AI arrives at. How do you handle this trust deficit challenge?

Michael: A lot of people think that these AIs are black boxes that they cannot understand.

And that is starting to change. There are techniques that allow you to explain AI, explain this black box. There's obviously the simple approach of using an interpretable model.

But whenever you do use these black-box models, such as deep neural networks, random forests and other models that have orders of magnitude more parameters than data points, then you do have to explain them retrospectively.

And there is a whole discipline around explainable AI that's going on right now. Very often in the business world, it's not good enough to say that you achieve a 90% prediction accuracy, right? You have to tell why.

For example, sometimes it's required by law. If you have a system that determines whether to lend someone money for a loan, and you reject them, you need to tell them why. You can't just say the AI says you don't score high enough, so too bad, right? [LAUGHTER]

People trust things that they can explain and understand. But you can design the system in such a way that people can build trust with it more easily.
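
As one example of the retrospective explanation Michael mentions, the sketch below trains a black-box model on synthetic data and uses permutation importance, a model-agnostic technique, to report which inputs drove its predictions. The loan-style feature names are hypothetical and only stand in for whatever a real system would use.

```python
# Minimal sketch of post-hoc explanation: train a black-box model on
# synthetic data and ask which inputs drove its predictions. The loan-style
# feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history_len", "num_defaults"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score drops: a simple, model-agnostic explanation.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```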

 

Audience question: The technology revolution is disrupting an increasing number of industries at a rapid pace. This will undoubtedly result in a forced workforce shift or evolution. Is the U.S. workforce prepared for the disruption that will come from AI and a workforce shift?

Michael: At the current stage, there's a lot of work that needs to be done—a lot of reskilling and retraining.

 


Are you ready to learn from these industry experts how you can productize AI and Machine Learning applications for engineering?

Then start your journey today with our online three-course series!