Over the past year, the field of artificial intelligence has witnessed the emergence of large language models (LLMs) like ChatGPT, which have seemed to overshadow the traditional approach of task-based AI. However, despite the rise of LLMs, task-based models continue to play a vital role in the enterprise, solving real-world problems and proving their worth across industries.
Key Takeaway
In the era of large language models, task-based AI models continue to hold their ground in the enterprise. Task models offer advantages such as specialization, cost efficiency, and performance optimization, making them a valuable tool for solving specific business challenges. Despite the rise of LLMs, data scientists remain essential for critically analyzing and understanding the relationship between AI and data within organizations.
Task Models: The Foundation of Enterprise AI
Task-based models have long served as the cornerstone of AI in the enterprise, even before LLMs came into play. Werner Vogels, Amazon CTO, referred to this approach as “good old-fashioned AI” in a recent keynote speech. According to Vogels, task models have consistently demonstrated their ability to address a wide range of business challenges.
Atul Deo, general manager of Amazon Bedrock, a service that provides access to large language models via APIs, shares a similar perspective. Deo believes that task models are not on the verge of extinction; instead, they have become another valuable tool in the AI toolkit.
Prior to the advent of large language models, task-specific models were trained from scratch for a single purpose. The key difference between task models and LLMs lies in their boundaries: task models are tailored to handle a specific task, while LLMs possess the flexibility to comprehend and tackle problems beyond the confines of their initial training.
The Relevance of Task Models
Jon Turow, a partner at investment firm Madrona and a former AWS employee, acknowledges the emerging capabilities of large language models, such as reasoning and out-of-domain robustness. These capabilities allow models to push beyond their initial scope, but their true extent and applicability still spark debate within the industry.
Turow strongly advocates for the continued use of task models, emphasizing their advantages. Task-specific models can be smaller, faster, cheaper, and, in certain cases, more performant due to their tailored nature. While the allure of an all-purpose model is undeniable, deploying multiple task models within an organization can prove to be more efficient in terms of cost, performance, and specialization.
Amazon recognizes the value of task models alongside the rise of LLMs. SageMaker, Amazon’s machine learning operations platform, remains a critical product with a significant user base. With tens of thousands of customers building millions of models, Amazon has enhanced SageMaker to accommodate the management of large language models.
The Role of Data Scientists
Previously, when task models dominated the AI landscape, companies relied on teams of data scientists to develop these models. With the advent of LLMs, many tools now target developers instead. However, data scientists still play a crucial role even in organizations focusing on LLMs.
Data scientists continue to bring invaluable expertise in critically evaluating data, a responsibility that is expanding rather than diminishing. Whatever type of model is in use, data scientists help companies understand the intricate relationship between AI and data, particularly within large enterprises.
In conclusion, both task-based AI and large language models will coexist for the foreseeable future. While LLMs offer unparalleled flexibility and broader applicability, task models remain relevant due to their specialization, cost efficiency, and performance advantages. The role of data scientists in analyzing and assessing data in the age of AI is indispensable, regardless of the model being employed.