Why we invested in Unify, a single API to run any AI model across all providers
Unifying the libraries and frameworks used in artificial intelligence (AI) is a crucial step toward streamlining development, reducing costs, and making AI more accessible and versatile.
One of the key challenges in AI development has been the fragmentation of frameworks and hardware, which drives up costs and development time. We’re excited to invest in Unify, and are particularly compelled by the team’s approach to automatically bridging the gap between different frameworks with a single line of code.
Unify streamlines LLM testing and deployment across providers through a unified interface: a standard API, single sign-on, and dynamic routing that optimizes for speed, cost, and model quality on a per-prompt basis. Unlike other routers, Unify publishes daily runtime and quality benchmarks and makes the underlying data publicly accessible. It also integrates seamlessly with major LLMOps platforms such as LangChain and LlamaIndex, making it valuable for large-scale LLM deployment, reducing latency and cutting costs for businesses. By providing built-in functions for memory, vector operations, cryptography, and more, Unify makes compute more efficient and effective. With Unify, developers can run any piece of code on any machine learning framework, backend, and hardware, with AI models optimized during compilation.
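To make the routing idea concrete, here is a minimal sketch of how a prompt could be sent either to a pinned model or to a per-prompt router through an OpenAI-compatible endpoint. The base URL, the "model@provider" string, and the router tag below are illustrative assumptions for this post, not a definitive description of Unify’s API.

```python
# Minimal sketch of per-prompt routing through a unified, OpenAI-compatible
# endpoint. The base URL and the "model@provider" / router strings are
# assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.unify.ai/v0/",  # assumed unified endpoint
    api_key="YOUR_UNIFY_KEY",             # placeholder credential
)

prompt = [{"role": "user", "content": "Summarize our Q3 roadmap in three bullets."}]

# Pin a specific model and provider...
pinned = client.chat.completions.create(
    model="llama-3-8b-chat@together-ai",  # illustrative "model@provider" string
    messages=prompt,
)

# ...or let a router choose per prompt, trading off quality, cost, and speed
# (the router string is a hypothetical example).
routed = client.chat.completions.create(
    model="router@quality>0.8|cost<0.5",
    messages=prompt,
)

print(pinned.choices[0].message.content)
print(routed.choices[0].message.content)
```

Because the interface stays constant, switching providers or routing policies is a one-line change rather than a re-integration.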
We are impressed by the team’s strong background in generative AI for code and large language models. CEO and founder Daniel Lenton pursued his PhD at Imperial College London, where his research focused on solving the ML fragmentation problem by building a new AI-specific compiler. While there, he watched colleagues struggle to collaborate because of the diverse software stacks in use at major tech companies. Frustrated, he developed Ivy, a unified machine learning framework, which gained popularity on GitHub with 14k stars and a 20k-member Discord community. Now, with Unify, the team aims to simplify and accelerate large language model deployment, addressing today’s issues of speed, complexity, and cost in AI.
Unify’s capabilities have far-reaching implications for AI development. As the demand for task-specific ML libraries grows, the need for a unified compiler becomes more urgent. Unify’s built-in AI-specific functions, including attention mechanisms for transformers and other neural network layers, offer a compelling solution. By automatically resolving critical dependencies during compilation, Unify accelerates development, cuts costs, and reduces training times for AI models, all while ensuring optimal performance.
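The underlying “write once, run on any framework” idea can be sketched with Ivy, the open-source framework mentioned above. The snippet below is a simple illustration based on Ivy’s documented backend-switching usage; treat the exact function names and signatures as assumptions rather than a verified listing.

```python
# Illustrative sketch of framework-agnostic code via Ivy's backend switching.
# Function names follow Ivy's documented usage; exact signatures are assumed.
import ivy

def normalize(x):
    # The same function body runs unchanged on every supported backend.
    return (x - ivy.mean(x)) / ivy.std(x)

for backend in ("numpy", "torch", "jax"):
    ivy.set_backend(backend)              # select the underlying ML framework
    x = ivy.array([1.0, 2.0, 3.0, 4.0])   # created as a backend-native array
    print(backend, normalize(x))
    ivy.unset_backend()                    # restore the default backend
```

The point is not the arithmetic but the dispatch: one function definition, many frameworks and hardware targets, with the heavy lifting deferred to compilation.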
We’re excited to join Unify on their journey, and believe that the company’s unique approach to AI model optimization has the potential to reshape the industry, simplifying development, cutting costs, and accelerating progress.
Andy Duong is an investor at Samsung Next. Samsung Next's investment strategy is limited to its own views and does not reflect the vision or strategy of any other Samsung business unit, including, but not limited to, Samsung Electronics.