A year ago, we embarked on a journey to build an AI that’s useful beyond answering questions. An AI capable of getting anything done for us, the way we do it as humans.

AI can reason, process documents, create images, read charts, and generate sounds - but that’s only a small fraction of the capabilities we use as humans to work every day.

Most of our time is spent clicking and navigating through websites and applications to collect and input information.

Search, navigate the results, go back, click there, copy that, process, write here, paste there. Rinse. Repeat. Such wasted time.

The big labs promised us a dystopian future where AGI surpasses the smartest among us at everything. We created Twin Labs to build a different future, one where AI makes our days a little less dumb.

While most of the common tasks we do every day are intuitive for us, state-of-the-art models fail miserably at them. In the past year, we’ve built a completely new architecture that combines vision, code, and reasoning models with a new kind of data - to create an AI capable of acting like humans.

On November 19th, we will reveal our research results, new products, and a new paradigm for how we interact with our computers.

Stay tuned.

The Twin Labs team.

PS: we’re recruiting research engineers, research scientists (reinforcement learning, vision, multi-agent systems), browser and VM engineers, infra/DevOps engineers, and many more.


Feel free to ping us at jobs@twin.so