
TabbyML Raises $3.2 Million To Challenge GitHub Copilot


A new player has entered the race to build AI coding assistants: TabbyML, developed by two former Google employees, has secured $3.2 million in seed funding. TabbyML is an open source code generator that aims to rival GitHub Copilot, with the added advantage of being highly customizable.

Key Takeaway

TabbyML, an open source code generator, has raised $3.2 million in seed funding to compete with GitHub Copilot. TabbyML's customizable nature sets it apart from Copilot and makes it especially appealing to larger enterprises with proprietary code. While Copilot is powered by OpenAI and benefits from access to very large AI models, TabbyML aims to lower the deployment barrier by recommending models with 1-3 billion parameters. As computing costs decrease and open source models improve, TabbyML believes it can eventually compete on the same level as Copilot.

The Rise of Customizable AI Assistants

Unlike GitHub Copilot, TabbyML is a self-hosted coding assistant, offering a more flexible solution that can be tailored to the unique needs of each company. Co-founder Meng Zhang believes that customization will be a key demand in software development for all companies in the future. “There are probably more mature and complete products in the proprietary software space, but if we compare an open source solution with GitHub’s OpenAI-powered tool, there are more limitations to the latter,” Zhang explained.

According to TabbyML co-founder Lucy Gao, open source software is particularly valuable for larger enterprises. While independent developers may use open source code in their projects, engineers within enterprises often work with proprietary code that is not accessible to Copilot. Gao illustrated this point by stating that with TabbyML, she can easily quote and incorporate a colleague’s recently written code.
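Because a self-hosted assistant runs entirely inside a company's own infrastructure, the proprietary code it reads never has to leave the building. The snippet below is a minimal sketch of what asking a locally deployed completion server for a suggestion might look like; the port, endpoint path, and payload shape are illustrative assumptions, so consult TabbyML's documentation for the actual API.

```python
# Minimal sketch of querying a locally deployed code-completion server.
# The URL, endpoint path, and payload shape are illustrative assumptions;
# check TabbyML's documentation for the real API.
import requests

payload = {
    "language": "python",
    "segments": {
        "prefix": "def load_user_settings(path):\n    ",  # code before the cursor
        "suffix": "",                                      # code after the cursor
    },
}

# Because the server is self-hosted, the proprietary code in the prompt
# stays inside the company's own infrastructure.
response = requests.post("http://localhost:8080/v1/completions", json=payload, timeout=10)
response.raise_for_status()
print(response.json())
```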

Addressing Limitations and Enhancing Reliability

As with any AI-based technology, code generators are prone to producing the occasional bug. However, Gao believes that tackling this challenge is relatively easy with a self-hosted solution like TabbyML: the AI model behind TabbyML continually learns and refines its suggestions based on user feedback and edits.

It’s important to note that code generators are designed to assist human programmers and not to replace them. The outcomes so far have been promising. GitHub recently released a survey showing that Copilot users accepted 30% of the suggestions generated by the coding assistant. At a Google developer event, it was revealed that 24% of software engineers experienced more than five “assistive moments” a day using Google’s AI-augmented internal code editor Cider.

The Future of AI Models in Code Generation

In the ongoing competition with GitHub Copilot, TabbyML co-founder Zhang argues that OpenAI’s advantage will diminish as other AI models become more powerful and the cost of computing power decreases over time. GitHub and OpenAI can currently deploy AI models with tens of billions of parameters through the cloud, but serving models of that size is expensive. As a countermeasure, Copilot has implemented request batching to mitigate expenses. That strategy has its limits, however: Microsoft reportedly lost more than $20 per user per month on Copilot in the early part of this year.
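Request batching itself is a general serving technique rather than anything unique to Copilot: the server holds incoming completion requests for a few milliseconds and runs them through the model in a single pass, amortizing the cost of each GPU call. The sketch below is a generic illustration of the idea in Python, not a description of GitHub's actual implementation; run_model is a stand-in for any batched inference call.

```python
# Generic illustration of request batching for a model server.
# "run_model" is a placeholder for any inference function that is
# cheaper per request when it processes several prompts at once.
import asyncio

MAX_BATCH = 8      # flush once this many requests are queued
MAX_WAIT = 0.02    # ...or after 20 ms, whichever comes first

def run_model(prompts):
    # Placeholder for a single batched forward pass over all prompts.
    return [f"completion for: {p}" for p in prompts]

async def submit(queue, prompt):
    """Enqueue one request and wait for its batched result."""
    future = asyncio.get_running_loop().create_future()
    await queue.put((prompt, future))
    return await future

async def batcher(queue):
    """Collect requests for a short window, then serve them together."""
    while True:
        batch = [await queue.get()]
        deadline = asyncio.get_running_loop().time() + MAX_WAIT
        while len(batch) < MAX_BATCH:
            remaining = deadline - asyncio.get_running_loop().time()
            if remaining <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), remaining))
            except asyncio.TimeoutError:
                break
        for (_, future), result in zip(batch, run_model([p for p, _ in batch])):
            future.set_result(result)

async def main():
    queue = asyncio.Queue()
    worker = asyncio.create_task(batcher(queue))
    print(await asyncio.gather(*(submit(queue, f"prompt {i}") for i in range(5))))
    worker.cancel()

asyncio.run(main())
```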

TabbyML takes a different approach, recommending models with 1-3 billion parameters, which may mean lower-quality suggestions in the short term. However, Zhang believes that as the cost of computing power falls and the quality of open source models improves, GitHub and OpenAI’s competitive edge will shrink as well.
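To give a sense of what the 1-3-billion-parameter end of the spectrum looks like in practice, the sketch below loads a roughly 2-billion-parameter open source code model with the Hugging Face transformers library and generates a completion on local hardware. The model id is only an example of a model in this size class, and the transformers and torch packages are assumed to be installed.

```python
# Minimal sketch: run a small (~2B-parameter) open source code model locally.
# The model id below is just one example of a model in the 1-3B class;
# requires the transformers and torch packages.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Salesforce/codegen-2B-mono"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n):\n    "
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic; a production server
# would add sampling, caching, and batching for throughput.
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```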
