
Part Three: Jason Marsh on Telling Data Stories with Flow Immersive (2025)

The Voices of VR Podcast

Hello

My name is Kent Bye, and welcome to the Voices of VR podcast. It’s a podcast that looks at the structures and forms of immersive storytelling and the future of spatial computing. You can support the podcast at patreon.com/voicesofvr.

Introduction

Continuing my series on AWE, past and present, today’s interview is part three of three in my conversations with Jason Marsh, the founder and CEO of Flow Immersive. I had a chance to run into Jason again at the Augmented World Expo, as I have on several occasions over the years. He showed me the latest demo on the XREAL glasses, which he pulled out of his pocket. I was able to see not only the data stories, like energy use across various contexts, but also what Caitlin Krause is doing with quantified-self data: taking all this data from your body and analyzing it to understand the different patterns in your data over time.

That was a completely separate application, and I had a chance to deep dive with Caitlin into other projects she’s working on in mindfulness and digital well-being. I’ll be discussing some of those conversations with Caitlin at the end of this series. This conversation with Jason was very quick, given that it was the beginning of the day and he had a full schedule of back-to-back demos. I managed to get a demo and squeeze in an interview to get a sense of his ongoing journey in the XR space, utilizing the latest technology platforms and continuing with a WebXR approach of JavaScript-based development, while using tethered phones connected to the birdbath smart glasses from XREAL.

So, covering all that and more on today’s episode of the Voices of VR podcast! This interview with Jason happened on Thursday, June 12, 2025, at Augmented World Expo in Long Beach, California. Now, let’s dive right in.

Jason Marsh’s Introduction

Hi, I’m Jason Marsh. I’m the founder and CEO of Flow Immersive. We do data visualization in augmented reality, specifically focusing on the best possible conversation around data and data collaboration. Our goal is to get ideas into your audience’s head in a much more consistent and powerful way than PowerPoint, which we all forget the minute we leave the room. We’re leveraging spatial reasoning and understanding to have those great conversations that help drive business decisions and solve problems in the world.

Background and Journey

Oh, I founded Flow Immersive nine years ago, and I had been working in this space for a couple of years before that. It’s been quite a long journey at this point, and really exciting along the way. Before that, I actually began my career at Apple Computer in 1991, working on speech recognition. I’ve been programming for quite a while and have always focused on the enterprise space.

Throughout the years, we’ve had numerous interviews, and as part of this conversation at AWE, my intention is to air everything in sequence. We’ve talked about VR, AR, and different devices, and now we’re focusing on the XREAL’s six-dof use. It feels like a really good form factor for what you’re doing, and I’m curious to hear your thoughts on the latest iteration of using the XREAL glasses for data visualizations.

The X Real Glasses

The XREAL glasses are really nice: definitely ahead of the curve and indicative of where I think the industry is headed. For our data visualization use case, we’re looking at conference room tables around the world. In that environment, larger headsets just don’t feel human or natural for interaction. On the other hand, glasses are part of the human technological experience in a very intimate way. We’ve been putting things on our faces for hundreds of years, so we’re very comfortable engaging with technology this way. Glasses resonate with our human experience differently compared to larger headsets.

In years past, we’ve had various systems, from six-dof controllers to some hand tracking. With the XREAL glasses, you essentially have a setup where you can use a tablet or a Samsung phone as a three-dof controller while rendering all the content from the phone to the glasses. There seems to be a broader trend of offloading computation from the glasses onto something like a puck or phone. Your demo appears to show the future of where this is going, but I’d love to hear your reflections on this latest form factor, which has its limitations in three-dof tracking compared to six-dof tracking. However, it is very self-contained and portable: you can give someone a demo pretty much anywhere with minimal setup.

I’d love to hear your latest thoughts on this form factor that uses a phone tethered to glasses to create a more spatial experience of the data.

Thoughts on Current Technology

This generation of XREAL glasses that are tethered, whether to a phone or a puck, has many technological advantages, such as a lighter load on the head, since the processing happens on the smartphone; Samsung phones and iPhones have considerable processing power, and putting that directly onto a device on your head remains a tough technical challenge. From what we’re hearing from the market, this setup seems likely to be the norm for the next six to eighteen months until we transition to waveguide technologies, like Meta’s Orion glasses, which are intended to be untethered and wearable all day. So, we see these current glasses as transition devices.

In relation to headsets like the Quest, which are dedicated devices you put on for specific experiences rather than wearing all day, these glasses are much more convenient and comfortable. For example, when I ran into you in the hallway at AWE, I could approach you in a suit jacket and say, ‘Would you like a demo?’ while having my hands completely free. I’d just pull out the glasses from my pocket, and that feels so natural. Plus, you don’t have to worry about boundary setup or anything complicated; you can dive right into different data visualizations very quickly.

There’s still room for improvement in the vertical field of view, which I think has been a limiting factor in previous iterations; a wider field of view made the experience of data more immersive, but the current one is still good enough to visualize information further away. You’re continuing to explore different visualizations of comprehensive data points, such as immersive graphs, and also comparing different types of data relative to each other. Could you elaborate on the energy use case you’re working on? What context did it come from, and how is it being utilized in a professional enterprise context?

Energy Use Case

One of the large consultancies had us compile an analysis of EV adoption, looking at all its different aspects: where the power comes from, the generation and transmission lines, the locations of charging stations, how to address range anxiety, and the costs of ownership in terms of electricity. We considered how these elements interact, plotting them all on maps with a high degree of interactivity to filter different sizes of transmission lines and various types of power plants, zoom into specific geographic areas, and layer it all at once on the same visualization. This approach helps unveil critical relationships, like where you generate power versus where it gets used. Timing is also significant; for instance, when are people charging their cars? As we shift towards more solar and wind energy, it’s advantageous to charge during the day, but people tend to use power during those same hours. These relationships are fascinating, and interacting with them as they float over the table between you and your audience represents the future of our work.

We’ll touch on AI shortly, but I also want to emphasize the collaborative nature of our work. Collaboration and communication are the missing pieces in our current data landscape. Dashboards get people excited when they’re created, but they’re often underutilized, typically not used by the upper management who need them for informed decision-making.

The Power of 3D Visualization

Would you like to discuss some reasons why 3D might be a better way to visualize data in augmented reality compared to 2D?

Absolutely. When you mention a collaborative social experience, does that imply that some of these applications are networked so multiple people can engage with the same data simultaneously in a shared virtual space?

Absolutely. With our system, users can be on a Quest, an HTC headset, other headsets, phones in AR or not, or on a laptop. It’s fully native both on the web and in C# within Unity. Everyone sees each other, along with laser pointers from everyone else. When anyone interacts, whether clicking filters or using the legend systems, all users see that same interaction in real time, creating a concrete expression of symbolic information shared in the same mental and physical space simultaneously. It feels like the movies, where everything interacts seamlessly.
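To make that collaboration model concrete, here is a minimal sketch of the broadcast pattern Jason describes: a relay that forwards each participant’s interaction to everyone else in the session. Flow Immersive’s actual protocol isn’t public, so the message shape and names below are hypothetical, and the sketch uses Python with the websockets package purely for illustration; Flow’s own stack is WebXR and JavaScript, as mentioned in the introduction.

```python
# Hypothetical relay: every client interaction (e.g., clicking a legend
# filter) is rebroadcast to all other participants, so everyone sees the
# same state in real time. Requires: pip install websockets (v12+).
import asyncio
import json

import websockets

connected = set()  # every participant currently in the session


async def relay(websocket):
    """Register a participant and rebroadcast each interaction to the rest."""
    connected.add(websocket)
    try:
        async for message in websocket:
            # Example (invented) message shape:
            # {"type": "filter", "field": "plant_type", "value": "solar"}
            event = json.loads(message)  # validate it is well-formed JSON
            # Everyone else receives the same interaction.
            websockets.broadcast(connected - {websocket}, json.dumps(event))
    finally:
        connected.discard(websocket)


async def main():
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```

The design choice worth noting is that only the interaction events travel over the wire, not the rendered scene; each client applies the same event to its local copy of the visualization, which is what lets headsets, phones, and laptops share one session.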

The Compelling Nature of 3D Data Analysis

Now, why is 3D such a compelling case for analyzing data? One core idea is that on a flat screen, it’s challenging to manage visual crowding when dealing with details, which are critical as they often carry the risks. For instance, if you’re reviewing quarterly sales results in PowerPoint with a bar chart, you’ve summarized away the risks inherent in that data. If you drill down into monthly, daily, or even finer granularity, right down to the transaction level, you’re unlikely to manage that on a flat screen due to visual clutter. But with stereoscopic vision and the ability to lean in with real-time filtering, you can identify outliers. You might discover that three transactions saved your quarter, which is a critical insight for a CEO. Ultimately, being able to understand risks and view the relationships surrounding them are significant benefits of using 3D.
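As a toy illustration of that drill-down, a few lines of pandas show how a handful of transactions can dominate a quarterly total; the numbers and column names here are invented for the example.

```python
# Invented transaction data: the quarterly bar hides that a few large
# transactions carry most of the result.
import pandas as pd

tx = pd.DataFrame({
    "date": pd.to_datetime(["2025-01-15", "2025-02-03", "2025-02-20",
                            "2025-03-01", "2025-03-28", "2025-03-30"]),
    "amount": [1_200, 950, 48_000, 1_100, 52_500, 61_000],
})

quarter_total = tx["amount"].sum()
top3 = tx.nlargest(3, "amount")          # the outliers that saved the quarter
share = top3["amount"].sum() / quarter_total

print(f"Quarter total: {quarter_total:,}")
print(f"Top 3 transactions cover {share:.0%} of the quarter:")
print(top3)
```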

Collaboration on Digital Well-Being

I had a great conversation yesterday with Caitlin Krause about digital well-being. It sounds like you’ve been collaborating with her and MindWise on data visualizations related to the quantified self—gathering data on what’s happening in your body and translating it into various emotional spectrums. I’d love to hear more about this collaboration with Caitlin concerning MindWise and the visualization of data from the body.

This is indeed a fascinating use case for these glasses. The glasses form factor is about to be widely adopted, as various organizations release similar products over the next few months to a year. Essentially, we collect data from smartwatches, Bluetooth scales, and even devices like the Oura ring, which provide quantifiable metrics about you. But instead of simply presenting that data as ‘cool’ in 3D, though it is indeed cool, we incorporate AI to allow interaction. Caitlin’s work adds to this by integrating it into mindfulness practices, expressing our joy in being alive and grounding it in science. This collaboration seems like a unique combination that could make a meaningful contribution to both the XR community and a broader audience, showcasing the value of glasses in delivering a different experience compared to just using phones.

AI in Data Visualization

This morning, I posted on LinkedIn about my feelings at AWE regarding the reliance on AI as a savior for XR technologies. I expressed concerns about the peak of this hype cycle creating a collective delusion, especially with AI being considered a threat or a savior. Then you approached me wanting to show a demo of how you’re using AI for data visualization. I’d love to hear your thoughts on that.

Currently, AI serves as a fabulous tool. You can prompt AI to process data in various ways: filtering specific time frames, running a MACD analysis on stock data, or running a random forest regression for pharmaceutical use cases. It can even generate the Python code, process it, and return the result to the 3D space, helping you comprehend data in ways that are simply unattainable without such tools. So, it’s not a savior or a threat; it’s just an exceptionally useful tool for us, and I believe that will be its greatest short-term value.
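For a concrete sense of what such a generated analysis might look like, here is a minimal MACD computation in pandas, using the conventional 12/26/9 spans. This is an illustrative sketch, not Flow Immersive’s actual generated code, and the price series is synthetic.

```python
# MACD (Moving Average Convergence Divergence): the difference between a
# fast and a slow exponential moving average, plus a signal line that is
# an EMA of the MACD line itself.
import pandas as pd


def macd(prices: pd.Series, fast: int = 12, slow: int = 26,
         signal: int = 9) -> pd.DataFrame:
    """Return MACD line, signal line, and histogram for a price series."""
    ema_fast = prices.ewm(span=fast, adjust=False).mean()
    ema_slow = prices.ewm(span=slow, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    return pd.DataFrame({
        "macd": macd_line,
        "signal": signal_line,
        "histogram": macd_line - signal_line,  # momentum above/below signal
    })


# Synthetic prices; in the workflow described above, the result would be
# handed back to the 3D scene for rendering.
prices = pd.Series([100, 102, 101, 105, 107, 106, 110, 112, 111, 115] * 5,
                   dtype=float)
print(macd(prices).tail())
```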

When the context aligns with the data, AI performs well; when left unbounded, it tends to hallucinate. The challenge lies in keeping the large models grounded in the data they interface with.

Future of Flow Immersive

Lastly, any final thoughts on where you see this all going and how Flow Immersive plans to contribute to the future of immersive technology?

It’s fascinating to be ahead of the market for numerous years and finally see everything coming together. For us, it feels like we’re on the brink of becoming an overnight success after a decade of hard work, which is exciting. I didn’t realize how beneficial AI would be until we started using it; now it feels like that Jarvis experience from Iron Man, where technology feels more human than high-tech. It’s an unexpected twist that brings a sense of humanity to the outcomes.

Conclusion

Awesome! Jason, it was a pleasure to check out your latest demo and catch up with you again. Thank you for all the work you’ve done over the past nine to ten years to advance the dream of how XR technologies offer insight as we navigate data patterns. Thanks again for joining the podcast!

My pleasure.

Thank you for listening to this episode of the Voices of VR podcast! If you enjoy the podcast, please spread the word, tell your friends, and consider supporting us on Patreon. This podcast relies on donations from listeners like you to continue providing coverage. You can become a member and donate today at patreon.com/voicesofvr. Thanks for listening!