This week at Google I/O, one of the projects covered was a new experimental one called Project Tailwind – see Steven Johnson’s coverage on his Substack after the event. He’s been working part-time with Google on this project, which he describes this way:
Tailwind allows you to define a set of documents as trusted sources which the AI then uses as a kind of ground truth, shaping all of the model’s interactions with you. In the use case shown on the I/O stage, the sources are class notes, but it could be other types of sources as well, such as your research materials for a book or blog post. The idea here is to craft a role for the LLM that is not an all-knowing oracle or your new virtual buddy, but something closer to an efficient research assistant, helping you explore the information that matters most to you.
Google’s one line description is: “Tailwind is your AI-first notebook, grounded in the information you choose and trust.”
While working with the existing chatbots (ChatGPT, Google Bard, Microsoft Bing, etc.) is fun and useful, I’d be much happier with a research assistant that primarily works from content I’ve created, with an option to go beyond my content to the wider world. Johnson says he has “found that Tailwind works extremely well as an extension of my memory.”
Google’s initial implementation of Tailwind is based upon files in your Google Drive. For privacy reasons in particular, I’d welcome such a feature running and training locally on my own computer, rather than requiring me to upload my content to Google Drive and a cloud-based trainer.
I’ve requested access to Project Tailwind and look forward to experimenting with it when it’s made available. Meanwhile, here’s a short video that discusses Tailwind: