InstructLab: Advancing generative AI through open source
Introducing InstructLab, an open source project for enhancing large language models (LLMs) used in generative AI applications through a community approach.
Learn about Konveyor AI, an open source tool that uses generative AI to shorten the time and cost of application modernization at scale.
The AI Lab Recipes repository offers recipes for building and running containerized AI and LLM applications to help developers move quickly from prototype to production.
Explore the advantages of Podman AI Lab, which lets developers easily bring AI into their applications without depending on infrastructure beyond a laptop.
Learn how to build a containerized bootable operating system to run AI models using image mode for Red Hat Enterprise Linux, then deploy a custom image.
Learn how to deploy a trained AI model onto MicroShift, Red Hat’s lightweight Kubernetes distribution optimized for edge computing.
Accurately labeled data is crucial for training AI models. Learn how to prepare and label a custom dataset using Label Studio in this tutorial.
Learn how to configure Red Hat OpenShift AI to train a YOLO model using a provided animal dataset.
Red Hat Enterprise Linux (RHEL) 9.4 is now generally available (GA). Learn about the latest enhancements that improve the developer experience.
Learn how to install the Red Hat OpenShift AI operator and its components in this tutorial, then configure the storage setup and GPU enablement.
Learn how to deploy single node OpenShift on a physical bare metal node using the OpenShift Assisted Installer to simplify the OpenShift cluster setup process.
Learn how to create a Red Hat OpenShift AI environment, then walk through data labeling and information extraction using the Snorkel open source Python library.
Integrate generative AI in your applications with Podman AI Lab, an open source extension for working with large language models in a local environment.
Discover the benefits of KServe, a highly scalable machine learning deployment tool for Kubernetes.
VMware Cloud Foundation 5.1 now supports Red Hat OpenShift Container Platform 4.13 and NVIDIA AI Enterprise, offering automated, consistent infrastructure and more.
Learn how Intel Graphics Processing Units (GPUs) can enhance the performance of machine learning tasks and pave the way for efficient model serving.
Learn how to create a Java application that uses AI and large language models (LLMs) by integrating the LangChain4j library and the Red Hat build of Quarkus.
Discover how to integrate cutting-edge OpenShift AI capabilities into your Java applications using the OpenShift AI integration with Quarkus.
MLOps with Kubeflow Pipelines can improve collaboration between data scientists and machine learning engineers, ensuring consistency and reliability at every stage of the development workflow.
Discover how to use machine learning techniques to analyze context, semantics, and relationships between words and phrases indexed in Elasticsearch.
Explore features that enhance automation productivity for developers in Ansible Lightspeed with IBM watsonx Code Assistant, now generally available.
Learn how to communicate with OpenAI ChatGPT from a Quarkus application using the ChatGPT API in this demo.
DevNation Day LATAM
GPT4All is an open source tool that lets you deploy large language models locally without a GPU. Learn how to integrate GPT4All into a Quarkus application.