IBM to Host Meta’s Llama 2-Chat Model in watsonx.ai Studio, Offering Early Access to Partners

This expands IBM’s collaboration with Meta on open innovation in AI, building on open-source projects developed by Meta such as the PyTorch machine learning framework and the Presto query engine used in watsonx.data.
This also supports IBM’s strategy of offering both its own AI models and models from third parties. In watsonx.ai today, AI builders can leverage models from IBM and the Hugging Face community that are pre-trained to support a range of natural language processing (NLP) tasks, including question answering, content generation and summarization, text classification, and extraction.
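As a rough illustration (not code from this announcement), prompting a hosted foundation model in watsonx.ai could look like the sketch below. It assumes the ibm-watsonx-ai Python SDK, the meta-llama/llama-2-70b-chat model identifier, and placeholder credential and project values; actual names, endpoints, and parameters may differ.

from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

# Placeholder credentials; substitute real values from an IBM Cloud account.
credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",  # assumed regional endpoint
    api_key="YOUR_IBM_CLOUD_API_KEY",
)

# Assumed identifier for the hosted Llama 2-Chat model.
model = ModelInference(
    model_id="meta-llama/llama-2-70b-chat",
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",
    params={
        "decoding_method": "greedy",  # deterministic decoding
        "max_new_tokens": 200,        # cap on generated tokens
    },
)

# A summarization prompt, one of the NLP tasks noted above.
prompt = "Summarize the following text in two sentences:\n<your text here>"
print(model.generate_text(prompt=prompt))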
IBM is committed to keeping trust and security principles at the forefront as it continues to roll out its generative AI capabilities. For instance, when users run the Llama 2 model through the prompt lab in watsonx.ai, they can toggle on the AI guardrails function to help automatically remove harmful language from the input prompt text as well as from the output generated by the model. Meta also documents the fine-tuning methodology used for its large language models.
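If the same guardrails behavior is also exposed programmatically, it might be enabled roughly as in this continuation of the earlier sketch; the guardrails keyword argument is an assumption about the ibm-watsonx-ai SDK, not something described in this announcement.

# Continuing the earlier sketch: an assumed flag that filters harmful language
# from both the input prompt and the generated output, analogous to the
# AI guardrails toggle in the prompt lab.
safe_output = model.generate_text(
    prompt="Answer the customer question below:\n<question>",
    guardrails=True,
)
print(safe_output)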
Furthermore, IBM Consulting has the expertise of 21,000 data, AI and automation consultants, in addition to its Center of Excellence for Generative AI, comprising more than 1,000 consultants with specialized generative AI expertise. These experts can work with clients to help tune and operationalize models for targeted use cases aligned to their specific business requirements.
IBM, like Meta, is a supporter of open innovation. There is value in engaging a robust and diverse community of AI builders and researchers to test, share feedback and collaborate on these technologies to drive further innovation. We are excited to see what these innovators will build with Llama 2 and other models on the watsonx platform.