Remember that mind-blowing Google I/O demo of an AI tool that unlocks hidden insights from your research documents? That’s NotebookLM, and it’s not just for tech giants anymore. (See this earlier blog post about what was originally Project Tailwind.)
As a longtime reader of author Steven Johnson (and avid follower of his “Adjacent Possible” Substack), I was thrilled to learn he’s now part of the team at Google Labs bringing this powerful technology to the masses.
Imagine uploading piles of research papers, articles, or even future forecasts (like I did with those year-end reports from Wall Street investment houses forecasting what’s expected in 2024!), and then having NotebookLM not only summarize them but also weave connections you might have missed. That’s exactly what I experienced.
NotebookLM’s “additional questions” feature is a game-changer, prompting me to explore angles I wouldn’t have considered on my own. It’s like having a tireless research assistant with an uncanny knack for spotting crucial details.
Of course, NotebookLM is still in its early stages. The current 20-document limit can feel restrictive, and its future as a paid product is unclear. But for researchers grappling with mountains of information, it's invaluable. It's not just about saving time; it's about sparking genuine intellectual leaps.
This tool isn’t just for academics, though. Imagine journalists using NotebookLM to connect seemingly disparate news articles, or students piecing together complex historical narratives. The possibilities are endless.
Sure, like any AI tool, it’s not perfect. Fact-checking is crucial, and occasional “hallucinations” can crop up. But NotebookLM’s source citations make verification easier, and its overall accuracy is impressive so far.
So, ditch the highlighter and embrace the future! NotebookLM isn’t just a fancy research tool; it’s a bridge to deeper understanding, more insightful analysis, and ultimately, groundbreaking discoveries. Unleash your research potential – your next breakthrough might just be a question away.
This week at Google I/O, one of the projects covered was a new experimental one called Project Tailwind – see how Steven Johnson covered it on his Substack after the event. He's been working part-time with Google on this project, which he describes this way:
Tailwind allows you to define a set of documents as trusted sources which the AI then uses as a kind of ground truth, shaping all of the model’s interactions with you. In the use case shown on the I/O stage, the sources are class notes, but it could be other types of sources as well, such as your research materials for a book or blog post. The idea here is to craft a role for the LLM that is not an all-knowing oracle or your new virtual buddy, but something closer to an efficient research assistant, helping you explore the information that matters most to you.
Google’s one line description is: “Tailwind is your AI-first notebook, grounded in the information you choose and trust.”
While working with the existing chatbots (ChatGPT, Google Bard, Microsoft Bing, etc.) is fun and useful, I'd be much happier having a research assistant that works primarily with content I've created, with an option to go beyond my content to the wider world. Johnson says he has "found that Tailwind works extremely well as an extension of my memory."
Google’s initial implementation of Tailwind is based upon files in your Google Drive. For privacy reasons particularly, I’d especially welcome such a feature being trained and used locally on my own computer rather than having to upload my content to Google Drive and a cloud trainer.
I’ve requested access to Project Tailwind and look forward to experimenting with it when it’s made available. Meanwhile, here’s a short video that discusses Tailwind:
I don’t know if you use Snapseed or not but it’s become very much a part of my iPad/iPhone photography workflow.
I initially started using it because it has a Frames tool that lets me simply add a border to an image before uploading to Instagram/Facebook. But I've become increasingly addicted to a few of the other editing tools as well: tonal contrast, glamour glow, define (structure/sharpen), and faces. It also has a very nice healing brush, plus a dodge/burn tool that I use on monochromes.
This morning Google updated Snapseed to add a new Curves tool – which does what you think it should do – including letting you adjust curves by red/green/blue channel. Very nice update/upgrade – this app has become extremely useful for a mobile-only workflow, and it's amazing that it's all free from Google! If you haven't played with it in a while, give it a try.
A friend invited me about 10 days ago into Google+, Google’s new “social” service. As many others have commented, it’s very well done for a “field trial” as Google calls it. The UI is very nice – with a couple of exceptions like endless comment streams – and Google+’s handling of photographs is beautifully done. You can get to my Google+ posts by clicking on the G+ icon over at the top of the right sidebar on this page.
Of course, Google+ is still new – and it’s attractive partially just for that reason. It’s sort of like the new restaurant in town. Still, I’m finding that Facebook is getting less of my attention as a result of Google+. How about you?
As for Twitter, I typically keep it running – as a separate app – off on the right side of my display, always in view alongside my browser. It's a parallel feed – and I appreciate its "information density" with short posts, no integration of comments, etc.
Facebook, on the other hand, I run in a separate browser tab – a tab that I have to decide to click and go to – just like my email (ugh!). In the “attention economy”, seems to me that’s important – at least in the desktop environment.
On a mobile device, it’s clearly a different story. Each app is “all consuming”! We’ll see how the Google+ iPhone app affects our mobile usage – once that app is released.
Going forward, it’ll be very interesting to see where Google+ goes. Might it replace my separate blog here? Or…?