Andrew Ng: Even with generative AI buzz, supervised learning will create ‘more value’ in short term

One rarely gets to engage in a conversation with an individual like Andrew Ng, who has left an indelible impact as an educator, researcher, innovator and leader in the artificial intelligence and technology realms. Fortunately, I recently had the privilege of doing so. Our article detailing the launch of Landing AI’s cloud-based computer vision solution, LandingLens, provides a glimpse of my interaction with Ng, Landing AI’s founder and CEO.

Today, we go deeper into this trailblazing tech leader’s thoughts.

Among the most prominent figures in AI, Andrew Ng is also the founder of DeepLearning.AI, co-chairman and cofounder of Coursera, and adjunct professor at Stanford University. In addition, he was chief scientist at Baidu and a founder of the Google Brain Project.

Our encounter took place at a time in AI’s evolution marked by both hope and controversy. Ng discussed the suddenly boiling generative AI war, the technology’s future prospects, his perspective on how to efficiently train AI/ML models, and the optimal approach for implementing AI.



This interview has been edited for clarity and brevity.

Momentum on the rise for both generative AI and supervised learning

VentureBeat: Over the past year, generative AI models like ChatGPT/GPT-3 and DALL-E 2 have made headlines for their image and text generation prowess. What do you think is the next step in the evolution of generative AI? 

Andrew Ng: I believe generative AI is very similar to supervised learning, and a general-purpose technology. I remember 10 years ago with the rise of deep learning, people would instinctively say things like deep learning would transform a particular industry or business, and they were often right. But even then, a lot of the work was figuring out exactly which use case deep learning would be applicable to transform. 

So, we’re in a very early phase of figuring out the specific use cases where generative AI makes sense and will transform different businesses.

Also, even though there is currently a lot of buzz around generative AI, there’s still tremendous momentum behind technologies such as supervised learning, especially since the correct labeling of data is so valuable. Such a rising momentum tells me that in the next couple of years, supervised learning will create more value than generative AI.

Given generative AI’s rate of growth, in a few years it will become one more tool in the portfolio of tools AI developers have, which is very exciting.

VB: How does Landing AI view opportunities represented by generative AI?

Ng: Landing AI is currently focused on helping our users build custom computer vision systems. A lot of our tool announcements through Landing AI are focused on helping users adopt supervised learning and on democratizing access to the creation of supervised learning algorithms. We do have internal prototypes exploring use cases for generative AI, but nothing to announce yet.

Next-gen experimentation

VB: What are a few future and existing generative AI applications that excite you, if any? After images, videos and text, is there anything else that comes next for generative AI?

Ng: I wish I could make a very confident prediction, but I think the emergence of such technologies has caused a lot of individuals, businesses and also investors to pour a lot of resources into experimenting with next-gen technologies for different use cases. The sheer amount of experimentation is exciting; it means that very soon we will be seeing a lot of valuable use cases. But it’s still a bit early to predict what the most valuable use cases will turn out to be.

I’m seeing a lot of startups implementing use cases around text, either summarizing it or answering questions about it. I see tons of content companies, including publishers, signing on to experiments where they are trying to answer questions about their content.

Even investors are still figuring out the domain, so watching how the space consolidates, and identifying where the opportunities lie, will be an interesting process as the industry figures out where and what the most defensible businesses are.

I am surprised by how many startups are experimenting in this one area. Not every startup will succeed, but the learnings and insights from lots of people figuring it out will be valuable.

VB: Ethical considerations have been at the forefront of generative AI conversations, given issues we’re seeing in ChatGPT. Is there any standard set of guidelines for CEOs and CTOs to keep in mind as they start thinking about implementing such technology?

Ng: The generative AI industry is so young that many companies are still figuring out the best practices for implementing this technology in a responsible way. The ethical questions, and concerns about bias and generating problematic speech, really need to be taken very seriously. We should also be clear-eyed about the good and the innovation that this is creating, while simultaneously being clear-eyed about the possible harm. 

The problematic conversations that Bing’s AI has had are now being highly debated, and while there’s no excuse for even a single problematic conversation, I’m really curious about what percentage of all conversations can actually go off the rails. So it’s important to record statistics on the percentage of good and problematic responses we are observing, as it lets us better understand the actual status of the technology and where to take it from here.
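The kind of bookkeeping Ng describes can be sketched very simply. Below is a minimal, hypothetical example (the label format and function name are my own assumptions, not anything from Landing AI) of tallying per-conversation quality labels and computing the share that went off the rails:

```python
from collections import Counter

def summarize_conversations(labels):
    """Tally labeled conversation outcomes and report the fraction
    that were problematic.

    `labels` is a hypothetical list of per-conversation quality labels
    (e.g. "good" or "problematic"), produced by human review or an
    automated classifier.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    problematic_rate = counts.get("problematic", 0) / total if total else 0.0
    return counts, problematic_rate

counts, rate = summarize_conversations(
    ["good", "good", "problematic", "good", "good"]
)
print(counts)                                       # Counter({'good': 4, 'problematic': 1})
print(f"{rate:.0%} of conversations problematic")   # 20% of conversations problematic
```

Even a tally this crude answers Ng’s question, what percentage of all conversations actually go off the rails, which is more informative than reacting to individual anecdotes.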

Image Source: Landing AI

Addressing roadblocks and concerns around AI

VB: One of the biggest concerns around AI is the possibility of it replacing human jobs. How can we ensure that we use AI ethically to complement human labor instead of replacing it?

Ng: It’d be a mistake to ignore or not embrace emerging technologies. For example, in the near future, artists that use AI will replace artists that don’t. The total market for artwork may even increase, because generative AI lowers the cost of creating artwork.

But fairness is an important concern, which is much bigger than generative AI. Generative AI is automation on steroids, and if livelihoods are tremendously disrupted, even though the technology is creating revenue, business leaders as well as the government have an important role to play in regulating technologies.

VB: One of the biggest criticisms of AI/DL models is that they are often trained on massive datasets that may not represent the diversity of human experiences and perspectives. What steps can we take to ensure that our models are inclusive and representative, and how can we overcome the limitations of current training data?

Ng: The problem of biased data leading to biased algorithms is now widely discussed and understood in the AI community. Reading the research papers published now and earlier, it’s clear that the different groups building these systems take the representativeness and cleanliness of data very seriously, and know that the models are far from perfect.

Machine learning engineers who work on the development of these next-gen systems have now become more aware of the problems and are putting tremendous effort into collecting more representative and less biased data. So we should keep on supporting this work and never rest until we eliminate these problems. I’m very encouraged by the progress that continues to be made even if the systems are far from perfect.

Even people are biased, so if we can manage to create an AI system that is much less biased than a typical person, even if we’ve not yet managed to limit all the bias, that system can do a lot of good in the world.

Getting real

VB: Are there any methods to ensure that we capture what’s real while we are collecting data?

Ng: There isn’t a silver bullet. Looking at the history of the efforts from multiple organizations to build these large language model systems, I observe that the techniques for cleaning up data have been complex and multifaceted. In fact, when I talk about data-centric AI, many people think that the technique only works for problems with small datasets. But such techniques are equally important for applications and training of large language models or foundation models. 
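One concrete, widely used data-centric check, offered here as an illustrative sketch rather than anything specific to Ng's or Landing AI's tooling, is flagging inputs that appear in a training set more than once with conflicting labels. The example format and function name below are hypothetical:

```python
from collections import defaultdict

def find_label_conflicts(examples):
    """Flag inputs that appear more than once with conflicting labels:
    one simple data-centric check for cleaning a labeled dataset.

    `examples` is a hypothetical iterable of (text, label) pairs.
    """
    seen = defaultdict(set)
    for text, label in examples:
        seen[text].add(label)
    # Keep only inputs whose labels disagree.
    return {text: labels for text, labels in seen.items() if len(labels) > 1}

conflicts = find_label_conflicts([
    ("scratch on casing", "defect"),
    ("scratch on casing", "ok"),   # same input, conflicting label
    ("clean unit", "ok"),
])
print({t: sorted(l) for t, l in conflicts.items()})  # {'scratch on casing': ['defect', 'ok']}
```

Surfacing and resolving such conflicts, rather than collecting ever more raw data, is the spirit of the data-centric techniques Ng describes, and the same idea scales from small datasets up to the corpora behind large language models.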

Over the years, we’ve been getting better at cleaning up problematic datasets. We’re still far from perfect, and it’s not a time to rest on our laurels, but progress is being made.

VB: As someone who has been heavily involved in developing AI and machine learning architectures, what advice would you give to a non-AI-centric company looking to incorporate AI? What should be the next steps to get started, both in understanding how to apply AI and where to start applying it? What are a few key considerations for developing a concrete AI roadmap?

Ng: My number one piece of advice is to start small. So rather than worrying about an AI roadmap, it’s more important to jump in and try to get things working, because the learnings from building the first one or a handful of use cases will create a foundation for eventually creating an AI roadmap. 

In fact, it was partly this realization that led us to design LandingLens to make it easy for people to get started. If someone is thinking of building a computer vision application, they may not even be sure how much budget to allocate. We encourage people to get started for free and try to get something to work, whether or not that initial attempt works well. The learnings from trying to get it to work will be very valuable and will give a foundation for deciding the next few steps for AI in the company.

I see many businesses take months to decide whether or not to make a modest investment in AI, and that’s a mistake as well. It’s important to get started and figure it out by trying, with actual data, observing whether it’s working for you, rather than only thinking about it.

VB: Some experts argue that deep learning may be reaching its limits and that new approaches such as neuromorphic computing or quantum computing may be needed to continue advancing AI. What is your view on this issue? 

Ng: I disagree. Deep learning is far from reaching its limits. I’m sure that it will reach its limits someday, but right now we’re far from it.

The sheer amount of innovative development of use cases in deep learning is tremendous. I’m very confident that for the next few years, deep learning will continue its tremendous momentum.
Not to say that other approaches won’t also be valuable, but between deep learning and quantum computing, I expect much more progress in deep learning for the next handful of years.
