Interview News

Natural Language Processing is the force behind machine intelligence in many real-world applications


Dr. Amith Singhee, Director, IBM Research India and CTO IBM India/South Asia 

As Director, Dr. Amith Singhee sets the strategy for the Research division in IBM India, driving forward-looking innovation that fuels growth for IBM’s products and services. This includes foundational research in the areas of Hybrid Cloud, AI, Quantum Computing, Cybersecurity, and Sustainability. As CTO, he engages with the regional ecosystem in academia and industry to represent IBM’s technology vision. 

In an exclusive discussion with Enterprise IT World, Dr. Singhee provides insights into the developments at IBM Research in Artificial Intelligence and Natural Language Processing. Here are some excerpts.  

“Business areas where NLP already has had a big impact are customer care, product search, and business intelligence. NLP has made a huge impact in back-office services, which benefit from document processing.”

Dr. Amith Singhee, Director, IBM Research India and CTO IBM India/South Asia 

How is IBM leading the way in advancing breakthroughs in this field of Natural Language Processing (NLP)? 

In recent decades, the field of artificial intelligence (AI) has experienced tremendous scientific advancements — from vast improvements in processing power and computational efficiency to the emergence of more flexible and reusable AI systems that succeed in a broader range of application and domain areas. As NLP enables computers to process human language in the form of text or voice data, our endeavour has been to ensure that NLP models deliver high accuracy and are explainable and trustworthy. Our advancements in NLP are also focused on enabling users to easily and effectively customize NLP for their needs without special training or complex layers of guidance and information to learn.  

IBM has multiple products offering a multitude of NLP capabilities, ranging from fundamental ones like Named Entity Recognition (NER), dependency parsing, and part-of-speech tagging across languages via Watson NLP, to building efficient chatbots via Watson Assistant or searching for information in documents via Watson Discovery. Most recently, foundation models (FMs) such as GPT-3, InstructGPT, and ChatGPT have emerged as the latest advancement in NLP; as the name suggests, a foundation model can be extended to multiple downstream tasks with very little data.  
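To make a capability like NER concrete: a named-entity recognizer takes raw text and returns labelled spans such as organizations, locations, and products. The sketch below is not Watson NLP's actual API; it is a deliberately minimal gazetteer lookup, with all names and labels chosen for illustration, that only shows the shape of NER output. Production systems such as Watson NLP use trained statistical models instead.

```python
import re

# Toy gazetteer: entity names mapped to labels. Real NER models learn
# these associations from annotated corpora; this lookup table is an
# illustrative assumption, not any product's data.
GAZETTEER = {
    "IBM": "ORG",
    "India": "LOC",
    "Watson Assistant": "PRODUCT",
}

def toy_ner(text):
    """Return (span, label) pairs found in text via exact gazetteer match."""
    entities = []
    for name, label in GAZETTEER.items():
        for match in re.finditer(re.escape(name), text):
            entities.append((match.group(), label))
    return entities

print(toy_ner("IBM built Watson Assistant in India."))
# → [('IBM', 'ORG'), ('India', 'LOC'), ('Watson Assistant', 'PRODUCT')]
```

The same span-plus-label output format is what downstream tasks (search, document processing, chatbots) consume, regardless of whether the recognizer behind it is a lookup table or a neural model.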

However, what’s important for practical usage and adoption by industry is how to use these FMs with enterprise data, keeping in mind that extending them with an enterprise’s own curated data for commercial purposes faces legal hurdles. This is where IBM leads the way: developing infrastructure that enables clients to build and deploy their own FMs, and providing a technology stack to use and extend an FM across multiple downstream applications. IBM is engaging with clients globally and making significant contributions to advancing open source technology in this space, alongside advancing its own products and services, as a holistic approach to developing this infrastructure and stack. This will be a significant enabler in the coming years for businesses across industries to realize the full benefits of AI by leveraging more general AI technology in the form of foundation models. 

“Our advancements in NLP are focused on enabling users to easily and effectively customize NLP for their needs without special training or complex layers of guidance and information to learn.”  

How is IBM addressing the complex NLP challenges for AI? 

For a decade or so now, we have seen the sizes of state-of-the-art deep learning models continue to grow as they perform incredible feats. But building and maintaining a very large, complex model for each task we want to perform is unsustainable and expensive. To address this challenge, we are adapting these complex, powerful models to support many subsequent use cases, each of which is much faster and far less expensive to support. This new class of models is known in the industry as foundation models (FMs), and we believe it can deliver better performance at a lower cost than building each model from scratch, one at a time. For instance, in its first seven years, IBM Watson covered 12 languages; using foundation models, it jumped to cover 25 languages in about a year.  

Another critical problem that limits the industrial application of generative models is hallucination: they tend to produce factually incorrect information and can generate HAP (Hate, Abuse, and Profanity) content. At IBM, we are actively working on inventing new AI paradigms to control and possibly eliminate such hallucinations, and on intelligent debiasing techniques to detect and handle HAP generation. All of these are active research challenges with a direct impact on industrial application. That’s why IBM is deeply engaged with ecosystems globally, and in India with industry and government, in efforts to bring awareness, best practices, and tools around trustworthy and responsible AI.  

Thirdly, collecting, labelling, and auditing real data is expensive, and no matter how much you collect, it never fully captures the complexity of the physical world. In addition, healthcare records, financial data, and content on the web are all increasingly restricted due to privacy and copyright protection. Real data also come with baked-in biases that can reinforce or amplify existing inequities. That’s why we are betting on synthetic data to fill the gaps where real data are in short supply. Synthetic data are computer-generated examples that can speed up the training of AI models, protect sensitive data, improve accuracy, and help find and mitigate bias and security weaknesses. 
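The idea behind synthetic data can be sketched very simply: sample records from a hand-written distribution rather than collecting them from real people. The schema and value ranges below are illustrative assumptions, not any real dataset; serious synthetic-data generators fit the sampling distribution to real data while preserving privacy.

```python
import random

random.seed(7)  # fixed seed so the sampling is reproducible

# Hypothetical schema for a back-office training set. All names,
# cities, and amount ranges are invented for this illustration.
FIRST_NAMES = ["Asha", "Ravi", "Meera", "Karan"]
CITIES = ["Mumbai", "Delhi", "Bengaluru"]

def synthetic_record():
    """Generate one computer-made training example; since no real
    person's data is sampled, privacy restrictions do not apply."""
    return {
        "name": random.choice(FIRST_NAMES),
        "city": random.choice(CITIES),
        "invoice_amount": round(random.uniform(100.0, 5000.0), 2),
    }

# A thousand examples, produced in milliseconds at zero collection cost.
dataset = [synthetic_record() for _ in range(1000)]
print(len(dataset))
```

Because the generating distribution is under the author's control, known biases can be corrected at the source, e.g. by sampling under-represented categories more often.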

 
Lastly, while AI systems are quickly evolving to do more across business and society, challenges remain in creating systems that demonstrate agility, flexibility, a real understanding of the topics and problems they’re asked to solve, and increased common sense. That’s why we are exploring new ways to help AI exhibit human-like common sense, infer mental states, predict future actions, and even work with human partners.  

How does enterprise NLP allow businesses to “learn more with less”? 

Enterprises need AI that understands the unique language of their industry and business, integrates easily with an enterprise’s knowledge bank, and extracts insights from complex documents and data — all without depending on sophisticated data science skills, and while being able to work in many languages.  

IBM is driving a multi-faceted research agenda in enterprise NLP that solves these challenges and allows our clients to “learn more with less”. This idea is defined by three overarching objectives for NLP systems: adapting to understand and process individualized needs, delivering solutions faster, and enabling trust and ease of use.  

A key metric for “learning more with less” is how much effort is needed to bring an NLP system to a level of performance and trustworthiness that makes it useful in an enterprise setting, and subsequently how quickly one can derive business value from it for internal and external users.  

For a long time, a proven strategy has been to start with a model pre-trained on existing public-domain data, where the pre-training does most of the heavy lifting of understanding the language. Such pre-trained models are then adjusted — or, as we call it, fine-tuned — for enterprise-specific tasks, and such adjustment needs far less data.  

Recent innovations suggest that large language models can be customized with much less data if they are shown strategically chosen illustrative examples, a technique known as in-context learning, often combined with prompt tuning. All of this can further benefit from known NLP paradigms such as multi-task training and adversarial learning, which alleviate the need for task- or domain-specific enterprise data. To that end, related NLP tasks like knowledge graph extraction and reasoning also play a vital role, in conjunction with neural methods, in making the most of less data. 
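In-context learning amounts to placing a handful of labelled examples directly in the model's prompt instead of updating its weights. The sketch below assembles such a few-shot prompt for a hypothetical sentiment task; the task, reviews, and labels are invented for illustration, and the resulting string would be sent to whatever large language model the enterprise uses.

```python
# Two hand-picked labelled examples. In practice these would be chosen
# strategically from enterprise data; these reviews are invented.
EXAMPLES = [
    ("The product arrived broken.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

def build_few_shot_prompt(query, examples=EXAMPLES):
    """Assemble a few-shot prompt: instruction, worked examples, then
    the new query with its label left blank for the model to fill in."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt("Billing was confusing but fast.")
print(prompt)
```

The appeal is the data economy the interview describes: two examples in a string, rather than thousands of examples and a fine-tuning run, are enough to steer a capable model toward the task.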

Another aspect of “learning with less” using NLP is the drastic reduction in the time needed to take research innovations to end users — be they customers or an enterprise’s own workforce. This is an exclusive advantage of NLP, because it allows any end user to interact with technology directly through natural-language utterances. From the end-user perspective, as soon as an enterprise NLP system comes online, users can access content and knowledge much faster through natural language, which ultimately makes the enterprise more agile: its employees can learn fast and learn more with less effort and less time. 

Which business segments have widely adopted NLP? 

NLP is the driving force behind machine intelligence in many modern real-world applications, from spam detection and virtual agents to text summarization, insights, hiring, and sentiment analysis. Machine translation, chatbots, and multilingual question answering are among the most widely used NLP technologies, letting you expand your business across geographies as well as different user groups. Business segments dealing with human-computer interaction therefore rely on diverse NLP capabilities in their AI journey — to drive efficiencies in the supply chain and advertising efforts, or to improve customer and employee engagement — and thus are usually the first to adopt NLP advances organically.  

According to the 2022 IBM Morning Consult AI Adoption report, over 40% of IT professionals in India report that their company is using or considering using NLP solutions for business development (49%), security (45%), and customer care (44%). The report also states that a majority of IT professionals in India at organizations currently deploying AI say their company is using AI to improve customer service agent productivity (54%) and streamline how customers or employees find or resolve frequently asked questions (51%).  

There are also more niche technologies that are specific to certain segments. For example, no-/low-code program generation is adopted mainly by the software development segment, while natural language understanding of task descriptions is needed in the automation segment. 

In which business areas/operations can NLP have a disruptive impact? 

Some of the business areas where NLP has already had a big impact are customer care, product search, and business intelligence, to name a few. Another sector where NLP has made a huge impact is back-office services, which benefit from document processing.  

Some examples include invoice processing, billing, and quote-to-cash automation services. In fact, with the new advances in large language models, we have seen tremendous promise in storing and recalling generic world knowledge or, with some curated training, knowledge from a customized corpus. Such systems can be foreseen as the much-needed highly intelligent AI assistants that will strengthen all human-in-the-loop applications in solving real-world information-search problems across industries. 

Foundation models and generative AI have the potential to dramatically accelerate AI adoption and disrupt AI applications. Foundation models will make it much easier for businesses to dive in, and the highly accurate, efficient AI-driven automation they enable will mean that far more companies will be able to deploy AI in a wider range of mission-critical situations. Our goal is to bring the power of foundation models to every enterprise in a frictionless hybrid-cloud environment. 

And generative modeling will dramatically accelerate how we come up with new ideas around the discovery of things like new materials, drugs, climate solutions, and more.  
