*This post may contain affiliate links. If you click on a product link, we may receive a commission. We only recommend products or services that we personally use or believe will add value to our audience.*

Google’s Collaboration with Hugging Face Unlocks ‘Supercomputer’ Power for Open-Source AI Development

TL;DR: Google's partnership with Hugging Face allows developers on the AI model repository to access Google's cloud resources, including GPUs and TPUs, without the need for a subscription, enhancing open-source AI

Google Cloud’s recent collaboration with Hugging Face, a leading AI model repository, marks a significant advancement in open-source AI development. The partnership allows developers on Hugging Face’s platform to utilize Google’s tensor processing units (TPUs) and GPU supercomputers without requiring a Google Cloud subscription. Hugging Face, valued at $4.5 billion, hosts over 350,000 models, including Meta’s Llama 2 and Stability AI’s Stable Diffusion, making it a hub for AI enthusiasts. This strategic move aims to democratize machine learning by providing cost-effective access to Google’s powerful computing resources, including the coveted Nvidia H100s.

This collaboration streamlines the AI development process, offering Hugging Face users the ability to harness Google Cloud’s Vertex AI and Google Kubernetes Engine, set to be available in the first half of 2024. Google Cloud emphasizes its support for the open-source AI ecosystem, fostering accessibility to AI research, open-source libraries, and cloud features. Although some of Google’s large language models are not currently on Hugging Face, the partnership is expected to broaden access to AI models and applications.

Hugging Face’s mission revolves around enabling companies to build their own AI by leveraging open models and technologies. The collaboration with Google encompasses open science, open source, cloud, and hardware domains, promising enhanced accessibility to the latest AI innovations. The partnership extends to Google Cloud customers, offering new experiences such as improved training and deployment of Hugging Face models within Google Kubernetes Engine and Vertex AI. The unique hardware capabilities of Google Cloud, including TPU instances and powerful GPUs, will be harnessed to accelerate AI applications.

Millions of researchers, data scientists, developers, and AI hobbyists who rely on the Hugging Face Hub will benefit from the collaboration. The joint efforts in open science, open source, and Google Cloud will introduce new experiences throughout 2024. Users can anticipate making the most recent AI models easily deployable for production on Google Cloud with Inference Endpoints. This collaboration aligns with Google Cloud’s vision for making generative AI more accessible and impactful for developers.
