Living Memories Tracy Loftesness

Tracy Ellen Loftesness

In my post “Back to the Future” in January, I wrote about the passing of our daughter Tracy just before Christmas 2022. Here is her obituary.


Tracy Ellen Loftesness passed away on December 20, 2022 in Omaha, Nebraska after a courageous battle with esophageal cancer.

Tracy was born on August 18, 1971 in Daly City, CA to Scott and Linda Loftesness. After moves to the east coast and midwest, the family settled in Morgan Hill where Tracy attended Britton MS and Live Oak HS. She played tennis and basketball, joined FBLA, and was in the youth group and choirs at Advent Lutheran Church.

Tracy attended San Francisco State before transferring to Harvard University, earning a BA in Slavic Studies in 1993. She studied at Boalt Law at UC Berkeley, graduating with a JD in 1998. Tracy began a successful law career working at Littler Mendelson, Brobeck, and Hopkins & Carley before joining the research group that became Inventus. Tracy was highly respected by her colleagues for her work ethic, sharp mind, and leadership abilities.

Tracy loved languages, studying Spanish, French, Russian, and a bit of Arabic and German. She honed her Russian skills during a college summer session in Moscow, which included her first (and only) skydiving jump. Tracy later spent a year in Paris honing her French and working as a translator.

Tracy met Joseph Baumler in 2009 during a trip to New Orleans. From NOLA to San Jose to Castro Valley to Genoa, Nebraska, Tracy and Joseph shared 14 years of love and adventure. They enjoyed hiking, catering, and spending time with family and their beloved pit bull, Maggie.

Tracy was known for her generosity, offering steadfast support to any friend or family member in need. She was an advocate for change and, inspired by her grandmother Elizabeth, was an active member of Zonta International. Tracy was a leader and a tireless fundraiser in the Silicon Valley chapter of Zonta and an advisor to the Berkeley chapter.

Despite facing a difficult illness, Tracy never lost her spirit or determination. Her grace, humor, and optimism in the face of adversity are an inspiration to all of us.

Donations in Tracy’s memory may be made to two organizations Tracy was passionate about – Zonta International and Underdog Animal Rescue.

Tracy is survived by Joseph, her mother Linda, father Scott, brother David, sister Kari, her beloved dog Maggie, and many extended family members. We will miss her deeply, as will her friends, colleagues, and all those who had the privilege of knowing her. Tracy’s memory lives on through the lives she touched, the love she shared, and the acts of kindness and generosity done in her honor.

Books Calm Stress

Digital Stressors?

I recently read Chris Bailey’s new book “How to Calm Your Mind: Finding Presence and Productivity in Anxious Times” and found much of it useful. But there was part of the book that didn’t ring true in my life – namely Bailey’s discussion of choosing between analog and digital.

A few days ago I read an excellent review of Bailey’s book by Jim Eager. Jim really focuses on this analog vs digital point:

I think he misses the point when he turns to the question of how to relieve stress, and how to calm your mind. I think he’s picked the wrong battle, by framing it as “digital vs. analog.”

The real question is not analog vs. digital, but how do you avoid and minimize stress and bring more calm into your life?

There are distractions and stress producers in both the digital and the analog worlds. We have to learn to avoid and minimize distractions in both the analog and digital worlds as well. It’s not just an “analog vs. digital” issue.

I recommend reading all of Jim’s review – he’s very thoughtful in his commentary on the book.

AI AI: Large Language Models AI: Transformers ChatGPT

The Best Explainer of ChatGPT

Just came across a great explainer of ChatGPT and the underlying technologies by Stephen Wolfram: What Is ChatGPT Doing … and Why Does It Work?

That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text.

Highly recommended!

AI AI: Large Language Models AI: Transformers ChatGPT


woman in white knitted sweater
Photo by cottonbro studio

It’s been just over a week since Microsoft made such a big deal about an enhancement to its Bing search engine that adds OpenAI’s GPT chat capability to it. In the process, Microsoft declared how it expected this new capability to help drive significant market share growth in search.

But it’s become clear in the last week that this kind of chat capability is much less about enhancing search (particularly if you want accurate answers) and much more about generating creative text useful for other purposes. This tendency has become known as hallucination – where a chatbot simply strings text together regardless of the facts. Cade Metz writes: “hallucinate is just a catchy term for ‘they make stuff up.’”

I’ve had some great fun playing around with this to get help writing, for example, short stories. I’ll provide a few sentences to seed the chatbot’s “thinking” and then ask it to complete a 1,000 word short story based on that input I provided. It’s been fun to see what results.

Even more fun has been asking the chatbot to adjust the style to make the writing similar to other famous authors such as Hemingway, Steinbeck, Twain, George Saunders and others. It’s been fun to see the stylistic changes it makes to the same basic story based on the writer’s style that I specify.

None of this work has anything to do with search – nor does it help in any way by enhancing search results. It’s something completely different, strikingly interesting, and a heck of a lot of fun to play with. Whether it’ll be really useful in helping me do any real writing remains an open question – but meanwhile I’m enjoying sparring with a seemingly smart creative “mind” on the other end of my computer screen!

AI AI: Large Language Models AI: Transformers ChatGPT

Attention is all you need – or is it?

woman in white long sleeve shirt standing near white and gray house during daytime
Photo by Julian Jagtenberg

How important is accuracy? Sort of feels like the pursuit of quality in Zen and the Art of Motorcycle Maintenance!

I’ve been enjoying following the evolution of AI technology which seems to be accelerating at an ever increasing rate. Speaking with a good friend earlier this week, he said “Scott, it really feels to me like the early 90’s – when change was accelerating (the Internet) and we could feel it but didn’t really know what to make of it.” Indeed, it does feel like that again.

After spending some time playing with ChatGPT, Poe, and others, I’ve come to respect what they’re capable of. But I’ve also come to learn more about what they’re not capable of – namely, dealing with facts in an accurate way. These tools all provide a disclaimer that they may generate inaccurate results – and that their results must be checked for accuracy. And for good reason. Once you understand how the large language models work, you can understand why.

In my simple understanding, what these LLMs do is get trained on very large corpora of textual data – like the “whole Internet” – and that training is then “validated” by humans who test it with lots of queries and inspect the generated results. That combination of training and verification essentially sets the weights inside the model, which are then used in a fairly simple way to generate text – by moving from word to word (or word fragment to word fragment) and “writing” answers to queries. In other words, they’re using their training to come up with the best possible next word to output given the query they’ve been given.
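That word-by-word idea can be sketched with a toy model. This is purely illustrative – a real LLM uses a neural network trained on enormous corpora, not simple counts – but it shows the basic notion of “picking the best next word” based on what was seen in training (the tiny corpus here is made up):

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): count which word follows which in a
# tiny training corpus, then generate by emitting the most frequent one.
corpus = "the cat sat on the mat and the cat slept".split()

next_words = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current][nxt] += 1

def most_likely_next(word):
    # Pick the word seen most often after `word` in the training data.
    return next_words[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" – it follows "the" twice, "mat" only once
```

Note that the model happily answers with the most frequent continuation whether or not it’s factually right – which is exactly the accuracy problem described above.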

Clearly, the best possible next word isn’t necessarily an accurate one. Rather, it’s the one that the model has seen most frequently. Because of this fundamental characteristic of how these things work, you can get results that aren’t necessarily accurate. As a personal example, I asked ChatGPT to give me a history of a small town where my father grew up. I knew that history – including where the name of the town came from. ChatGPT gave me the wrong attribution for the name of the town. But when I asked a bit differently whether the name actually came from the source I knew to be correct, the model came back and agreed with me.

In my mind, coming to better understand these characteristics has helped me understand where these kinds of models may be very useful and other situations where they might be less useful. For example, asking one of these LLMs to help write a short story is a very good use. See an earlier example I wrote about using ChatGPT to write haiku poetry. That use case isn’t one that depends on any accuracy – it’s just one where clever use of text is all you want. Similarly, I can see how using LLMs to help write computer code (like GitHub Copilot) makes a lot of sense. But I question how much value an LLM can bring to helping deliver better search results – which is the current basis for what Microsoft is trying to do with its new Bing and enhancements to the Edge browser. It’s less clear that I want potentially inaccurate results from an LLM being delivered to me in response to search.

It’s early days to be thinking about all of this – and I’ve got a lot more to learn. Meanwhile, I’m really having some fun using these tools to help me write short stories – based on a start that I provide and then asking for the result to mimic the writing style of a Hemingway, Steinbeck or Didion. That kind of creative use definitely is helped using these tools. Getting accuracy in search results seems much less promising.


AI Chatbots are the new attention getters

The wave of AI chatbots continues to build – with the recent public release of ChatGPT, the new Poe app from Quora, today’s Bing announcement from Microsoft and whatever might be coming from Google, Facebook, Apple and others.

I’ve been doing quite a bit of experimenting with ChatGPT in particular – as you can see from some of my recent posts. Today I’ve been spending time with Poe on my iPhone – it’s a really well done app that makes it even easier to interact conversationally with AI chatbots (Poe supports multiple chatbots).

One thing is clear from my play – these new “tools” are true “attention sinks.” As you ask questions and research a topic, the ability to interact and dive deeper is quite addicting – and time consuming! It’s all an interesting experience.

With the potential for significant monetization for companies that can captivate our attention through their apps, the time we spend with AI chatbots may come at the expense of time spent on other apps, which could impact the ability of those other apps to continue to monetize at the same level. This shift will be intriguing to observe.

iPhone 14 Pro Max Menlo Park

A Rainy Walk in the Park

Yesterday we had some periods of rain – including a brief spattering of tiny hail. During a break in the rain, I headed out on my morning walk – only to have rain move in again by the time I reached the park. I enjoy walking in the rain – if it’s not too cold and not too windy – so I enjoyed it.

I wanted to play around a bit with Cinematic Mode on my iPhone 14 Pro Max. I made several movies – a couple of them are included below. It is interesting to see how the water and the splashes from the raindrops made focusing challenging for Cinematic Mode. I didn’t override anything manually – these two videos are straight out of the camera.

Sharon Park in the Rain #1
Sharon Park in the Rain #2
AI AI: Diffusion Models ChatGPT

Diffusion Models and Image Creation

I’ve been doing some more exploring with ChatGPT – this time asking it to educate me on how diffusion models work – they’re the underlying technology behind apps like DALL-E, Stable Diffusion, Midjourney, etc. Below is what I learned during my conversation with ChatGPT. The image below was created using DiffusionBee, an app based on Stable Diffusion. I used the input: a colorful illustration of how diffusion models work.
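As a rough sketch of the “forward” half of the idea – repeatedly mixing a little Gaussian noise into an image until only noise remains, with a model then trained to reverse that process to generate new images – here is a toy version. The noise level `beta` and the small array standing in for an image are made-up illustrative values, not taken from any particular paper’s noise schedule:

```python
import numpy as np

# Toy forward-diffusion process: each step mixes a bit of Gaussian noise
# into the data while (roughly) preserving its overall variance:
#   x_t = sqrt(1 - beta) * x_{t-1} + sqrt(beta) * noise
rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))   # stand-in for an image's pixel values

def forward_step(x, beta=0.02):
    noise = rng.normal(size=x.shape)
    return np.sqrt(1 - beta) * x + np.sqrt(beta) * noise

x = image
for _ in range(1000):
    x = forward_step(x)
# After many steps, x retains essentially no trace of the original image.
```

The generative trick is the reverse direction: a neural network is trained to predict and subtract the noise step by step, so that starting from pure noise it gradually “denoises” its way to a brand-new image.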

AI AI: Large Language Models AI: Transformers

Large Language Models and Transformers

This morning I noticed an article titled “Understanding AI” in the Financial Times about an upcoming two-week series on artificial intelligence. That prompted me to just ask ChatGPT to write a 1,000 word blog post describing large language models (LLM) and transformers. Below is what I got back from ChatGPT…a pretty useful description of both! By the way, today’s Wall St. Journal has an article titled “Sports Illustrated Publisher Uses AI to Write Articles” that discusses how some publications are beginning to use AI chat bots to generate stories for publication.

Also earlier this week I watched a YouTube video from Stanford with Professor Christopher Potts titled GPT-3 and Beyond – a bit technical for me but he provided a fascinating introduction to the progress in AI.

Large Language Models and Transformers

Large language models and transformers are artificial intelligence technologies that have been revolutionizing the field of natural language processing. These models are capable of performing complex language tasks such as text classification, question answering, and machine translation. In this post, we will provide a comprehensive overview of large language models and transformers, their history, how they work, and their current and future applications.

A brief history of language models

The study of language models has a long history dating back to the 1950s when the first attempts to build machine translation systems were made. At that time, the focus was on building rule-based systems that relied on pre-defined grammar and syntax rules to translate between languages. However, these systems were limited in their ability to handle ambiguity and variability in natural language.

In the late 1980s, the field of machine learning emerged, and language models started to use statistical techniques to improve their performance. Over time, researchers have developed more sophisticated models that can learn from large amounts of data and achieve better performance on various language tasks.

Introduction to large language models

Large language models are deep neural networks that are trained on massive amounts of text data to perform a wide range of natural language processing tasks. The key innovation behind these models is the ability to process and understand the context of words and phrases in a sentence. This allows them to generate more accurate predictions and perform more complex language tasks than traditional rule-based systems.

Large language models are trained on billions of words from a vast variety of text sources, including books, news articles, websites, and social media. This enables them to learn the patterns and relationships between words and sentences, as well as the meaning of words and phrases in different contexts.

Introduction to transformers

Transformers are a type of large language model that have been specifically designed for natural language processing. They were introduced in 2017 by Vaswani et al. in a paper called “Attention is All You Need.” The key innovation behind transformers is the attention mechanism, which allows the model to focus on specific parts of the input sequence when making predictions.

Traditional language models process sequential data by breaking it down into smaller pieces and processing each piece in a linear fashion. This makes it difficult for the model to consider the relationships between words that are far apart in the input sequence. Transformers, on the other hand, use attention mechanisms to allow the model to weigh the importance of different parts of the input sequence when making predictions.

How transformers work

Transformers work by first encoding the input sequence into a set of continuous vectors. These vectors are then processed by a number of self-attention layers, which use the attention mechanism to focus on specific parts of the input sequence when making predictions.

The attention mechanism in transformers works by calculating the relationships between all pairs of words in the input sequence. The model then uses these relationships to weigh the importance of different parts of the input sequence when making predictions. This allows the model to consider the context of words and phrases in the input sequence, leading to more accurate predictions.
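The pairwise scoring described above can be sketched in a few lines. This is a simplified single-head version of the scaled dot-product attention from the “Attention Is All You Need” paper; the vectors and sizes here are made up purely for illustration:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # affinity of every word pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # weighted mix of value vectors

# Three "words", each represented by a 4-dimensional vector.
# In self-attention, queries, keys, and values all come from the same input.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = attention(x, x, x)
print(out.shape)  # (3, 4) – one context-aware vector per word
```

Each output row is a weighted blend of all the input vectors, which is how a word’s representation comes to reflect the words around it – near or far in the sequence.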

After the self-attention layers, the encoded vectors are processed by a feed-forward neural network to make the final predictions. The feed-forward network includes a number of dense layers, which are fully connected layers that allow the model to make more complex predictions.
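The feed-forward step can be sketched the same way: two fully connected (dense) layers with a non-linearity in between, applied to each position’s vector independently. The sizes (4 in, 8 hidden) are arbitrary illustrative choices:

```python
import numpy as np

# Position-wise feed-forward sketch: dense layer -> ReLU -> dense layer.
# Weights are random here; in a real transformer they are learned.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

def feed_forward(x):
    hidden = np.maximum(0, x @ W1 + b1)  # ReLU non-linearity
    return hidden @ W2 + b2

tokens = rng.normal(size=(3, 4))  # three positions, 4-dim vectors each
print(feed_forward(tokens).shape)  # (3, 4)
```

Because the same weights are applied to every position separately, mixing information *between* positions happens only in the attention layers – the feed-forward block just transforms each vector in place.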

Applications of large language models and transformers

Large language models and transformers have a wide range of applications in the field of natural language processing. Some of the most notable applications include:

  • Text classification: Large language models can be used to classify text into different categories, such as sentiment analysis, topic classification, and spam detection.
  • Question answering: Large language models can be used to answer questions by extracting relevant information from a text corpus and providing a concise answer.
  • Machine translation: Transformers have been shown to be highly effective for machine translation, as they are able to handle the complexity of translating between languages and capture the relationships between words and phrases in different languages.
  • Summarization: Large language models can be used to summarize long documents into shorter versions that retain the main information.
  • Dialogue systems: Large language models can be used to build dialogue systems that can understand and respond to human language in a natural way.
  • Generative models: Large language models can also be used for generative tasks, such as writing creative fiction or composing music.

Current and future developments

In recent years, there has been rapid progress in the development of large language models and transformers. This is due in part to the availability of massive amounts of text data and the advancement of computational resources, which have allowed researchers to train models that are larger and more complex than ever before.

One of the current challenges in the field is reducing the computational cost of training and using large language models. This is important because large language models require vast amounts of computational resources, and their deployment is limited by the availability of these resources.

There are also ongoing efforts to improve the ethical and social implications of large language models and transformers. For example, there is concern about the potential for these models to reinforce biases and perpetuate harmful stereotypes. Researchers are working to develop methods for mitigating these risks and ensuring that large language models are used in a responsible and ethical manner.


Large language models and transformers have been making a significant impact on the field of natural language processing. Their ability to process and understand the context of words and phrases in a sentence has led to advances in a wide range of applications, including text classification, question answering, and machine translation. As these models continue to evolve, we can expect to see even more exciting developments in the future.

Apple General Magic Innovation

Magic in the Air

I’m not sure what provoked my interest earlier this week, but I enjoyed watching the documentary from a few years ago on the history of General Magic. Living in Silicon Valley, I remember hearing stories about the company – and the talented people working at the company led by CEO Marc Porat.

Watching the documentary provided some great background into how the company innovated in so many ways yet wasn’t able to be a commercial success. Lessons learned included how great ideas fail when they’re too early for the technology to support them, the implications of trying to partner with other large companies, and the dangers of competition that “fast follows” on your ideas.

Last night, I stumbled across another fun video produced by the Computer History Museum about the history of MacPaint, Quickdraw, and more as told by Bill Atkinson and Andy Hertzfeld. They were both early employees at Apple, very important software developers on the Mac platform, who left Apple to be early founders of General Magic. Listening to them talk about the early days of development at Apple was very interesting.