FAQ

Frequently Asked Questions (FAQ): If you have a question about Instill products, you're in the right place.

This page curates a list of frequently asked questions from our users, friends, candidates, investors, and the broader community.

Essentials

Why did you build Instill Core?

The modern data stack lacks unstructured data processing capabilities.

Processing unstructured data is challenging. We struggled with connecting data from different sources, developing deep learning models, deploying them, managing day-to-day operations, and continually building MLOps tools to maintain pipeline resilience and consistent AI performance. All of this was done in-house and was not scalable.

There must be a better way, and Instill Core is the solution.

To make AI accessible to everyone, the focus should not only be on algorithms (i.e., AI models) but also on the infrastructure tools that connect these algorithms with the modern data stack end-to-end.

For a more detailed narrative, read our blog articles "Why Instill AI exists" and "Missing piece in modern data stack: unstructured data ETL".

Who is behind Instill Core?

We are the Instill AI team. Our team is composed of agile professionals with years of experience in Computer Vision, Machine Learning, Deep Learning, large-scale databases, and cloud-native applications/infrastructure. We possess extensive knowledge in creating and managing complex AI systems.

Prior to the development of Instill Core, we grappled with the challenge of streaming large volumes of data (billions of images daily!) to automate Vision tasks using deep learning models, painstakingly building everything from the ground up.

We've learned that effective model serving for a seamless end-to-end data flow not only requires high throughput and low latency, but also cost-effectiveness. Achieving these criteria simultaneously is no easy feat. Eventually, we successfully constructed a robust AI system in-house that has been operational for years.

The system we built can be broken down into functional components that can be utilized across a wide range of AI tasks and industries. We believe it's time to leverage our experience to make AI more accessible to everyone, particularly those in the data industry.

Is Instill Core open source?

Yes. Instill Core CE (Community Edition) is source-available and free to use under the Instill Community License, with some parts released under the permissive MIT License.

Our goal is to make powerful AI infrastructure accessible to everyone, while also supporting a sustainable business. That’s why we make our core technology freely available for self-hosting, while offering Instill Core (Enterprise Edition) as a fully managed commercial service with additional features and support.

For a detailed breakdown of license types by component, visit the License page.

Is Instill Core free?

Yes. The Community Edition (CE) of Instill Core is free to use and self-host. You can run it on your own infrastructure without any license fees.

If you prefer not to manage your own deployment, we also offer a fully managed version of Instill Core as a paid service, which includes commercial features, SLA-backed support, and scalability out of the box.

Tech

What programming language does Instill Core use or support?

Instill Core's backend components are written in Go, and the frontend console is built with Next.js, TypeScript, and TailwindCSS.

However, Instill Core is designed with an API-first, cloud-native approach: you can interact with it using cURL or with client code auto-generated from its Protobuf definitions. We also offer Python and TypeScript SDKs to help users incorporate Instill Core into their existing technology stack, and SDKs for more languages are on our roadmap.
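To illustrate the API-first idea, the sketch below builds an HTTP request for triggering a pipeline using only the Python standard library. The endpoint path and payload shape here are hypothetical placeholders for illustration, not the documented Instill Core API; consult the official API reference or SDKs for the real paths and schemas.

```python
import json
import urllib.request

# Hypothetical base URL for a self-hosted instance (placeholder).
BASE_URL = "http://localhost:8080"

def build_trigger_request(pipeline_id: str, inputs: list[dict]) -> urllib.request.Request:
    """Build (but do not send) a JSON POST request that triggers a pipeline.

    The path and payload fields are illustrative assumptions, not the
    actual Instill Core API contract.
    """
    body = json.dumps({"inputs": inputs}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/pipelines/{pipeline_id}/trigger",  # placeholder path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_trigger_request("my-pipeline", [{"text": "hello"}])
```

Sending `req` with `urllib.request.urlopen(req)` (or the equivalent cURL command) would then exercise the same HTTP contract that the auto-generated Protobuf clients and the official SDKs wrap.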

What design principles does Instill Core adopt?

Instill Core adopts a microservice architecture, making it adaptable, versatile, and reusable, especially in situations where new components are continuously integrated. This architecture also enhances scalability, allowing each backend instance to scale based on its specific workload.

The IDEALS design principles (Interface segregation, Deployability, Event-driven, Availability over consistency, Loose coupling, and Single responsibility) provide a stringent framework for addressing design questions during the development of Instill Core.

The API-first approach is a natural outcome of adopting the microservice architecture and IDEALS. All backend components in Instill Core are designed with the API-first principle, ensuring a solid contract for future integration tasks.

The twelve-factor methodology offers practical guidelines for the development and deployment of Instill Core components.

We envision a future where AI and data industry tooling development is fully modularized. This means that MLOps components should prioritize flexibility, extensibility, and composability in their design principles.