AI That Helps Without Harvesting: Spell Checker + Our Privacy‑First Journey at MOSSLET Episode 4

· 09:02

Speaker 1:

You've probably heard this before: if it uses AI, your data is the product. On Mosslet, that's not true. In this episode, I'll walk you through our newest feature and how we fit AI into a small, bootstrapped social network without turning your thoughts into training data. Welcome back to the Mosslet podcast.

Speaker 1:

I'm Mark, creator of Mosslet, a small, bootstrapped, privacy-first social network with a personal private journal. Today is a short episode. I want to quickly share our new spell checker for your private journal, how we think about privacy-first AI, and what actually happens under the hood when you tap a button like Inspire Me or upload your handwritten journal entries to digitize them into your private journal. If you're new here, Mosslet is built around one simple idea: your online life should feel more like a private notebook and less like a billboard.

Speaker 1:

So instead of public content, you get a private journal that's just for you; a small, intentional social graph if you want to share with friends and family (although we are branching out to allow public posts and public sharing); and tools that help you reflect, like mood insights and gentle prompts. There are no engagement hacks. There's no algorithmic AI feed trying to keep you scrolling. Just a calmer corner of the Internet. Let's talk about the newest thing we've shipped: a spell checker for your private journal.

Speaker 1:

On the surface, it's simple. You write, and we quietly underline spelling issues and offer corrections, including a little custom definition dictionary. It feels like what you'd expect from a modern editor. But here's what matters: how it works. The spell checker lives inside your journal flow. It stays on your device.
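As a rough illustration of what an on-device spell check does conceptually, here's a minimal sketch in Python. This is illustrative only, not MOSSLET's actual checker (which runs client side in the journal editor); the tiny dictionary and function names are invented for the example.

```python
# Illustrative sketch: flag unknown words and suggest nearby dictionary
# words by similarity. Not MOSSLET's actual implementation.
from difflib import get_close_matches

# Toy dictionary for the example; a real checker ships a full word list.
DICTIONARY = {"my", "private", "journal", "entry", "grateful", "hopeful", "anxious"}

def check(text):
    """Map each misspelled word to up to three suggested corrections."""
    issues = {}
    for word in text.lower().split():
        if word not in DICTIONARY:
            issues[word] = get_close_matches(word, DICTIONARY, n=3)
    return issues

print(check("my privat journal"))  # -> {'privat': ['private']}
```

The key property is that nothing in this loop ever leaves the device: the text is compared against a local word list, and the suggestions are computed in place.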

Speaker 1:

It's designed for quiet help, not judgment or scoring. We're not using your journal entries to build a writing profile, a marketing segment, or a training data set. The goal is comfort and clarity. You can brain dump messy thoughts then clean them up if you want without worrying that every typo and every vulnerable sentence is being hoovered into some giant model. It sounds small, but this is the pattern we care about.

Speaker 1:

Use smart tools to support you, not to study you. We get this question a lot. Hold on. You have AI features? How is that private?

Speaker 1:

Totally fair question. Most AI powered apps work like this. You upload something deeply personal, a photo, a journal entry, a letter. It goes to a server farm, and then that content quietly becomes training data. Your anxiety journal becomes a row in a dataset.

Speaker 1:

Your letter to your grandmother becomes part of the next model. We wanted something different, AI that helps without harvesting. So we built our approach around three layers. Layer one, send only what's necessary. A few concrete examples.

Speaker 1:

When you tap inspire me for a journaling prompt on Mosslet, we don't ship your entire journal off to an AI. We send one word, your current mood, if you selected one. Anxious, hopeful, tired. That's enough to get a thoughtful prompt. For mood insights, we send dates, mood labels, word counts, like January 10, grateful, 342 words.

Speaker 1:

No actual journal content, no names, no events, just metadata. That's enough to spot patterns, like: you write more on weekends. The principle is simple: if the AI doesn't absolutely need it, we don't send it. Second layer: process and delete. The trickiest case is handwritten journal pages.
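To make that layer-one principle concrete, here's a sketch of what such minimal payloads could look like: one mood word for a prompt request, and bare metadata rows for mood insights. The function and field names are invented for illustration, not MOSSLET's actual code.

```python
# Hypothetical sketch of "send only what's necessary": the prompt request
# carries a single mood word, and insights send only date, mood label,
# and word count -- never journal text.

def inspire_me_payload(mood=None):
    """Build the minimal payload for a journaling-prompt request."""
    payload = {"feature": "inspire_me"}
    if mood:
        payload["mood"] = mood  # one word, e.g. "anxious", chosen by the user
    return payload

def mood_insight_rows(entries):
    """Strip each entry down to metadata before anything leaves the app."""
    return [
        {"date": e["date"], "mood": e["mood"], "words": len(e["text"].split())}
        for e in entries
    ]

rows = mood_insight_rows([
    {"date": "2025-01-10", "mood": "grateful", "text": "word " * 342},
])
# rows[0] carries only {"date": ..., "mood": "grateful", "words": 342}
```

Note that the journal text is consumed only to compute a word count; the text itself never appears in the output rows.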

Speaker 1:

To digitize them, the model has to see the image. Here's what actually happens. You upload a photo of your handwriting. The image is sent to a model that converts it to text. As soon as that's done, the image is immediately deleted.

Speaker 1:

We don't see it. Nobody else sees it. The extracted text is encrypted with your personal key before it ever touches our database, and then it's only accessible to you and only you. We don't keep the original image. We don't keep logs of what was in it.
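The process-and-delete flow can be sketched like this. Everything here is an assumption for illustration: the `ocr` and `encrypt_for_user` callables and the dataclass are stand-ins, not MOSSLET's actual implementation.

```python
# Illustrative sketch of the process-and-delete pattern: the model sees
# the image once, only the encrypted extracted text survives, and the
# image bytes are discarded whether or not OCR succeeds.
from dataclasses import dataclass

@dataclass
class DigitizedEntry:
    ciphertext: bytes  # extracted text, encrypted with the user's personal key

def digitize(image, ocr, encrypt_for_user):
    """OCR a handwritten page, keep only the encrypted text."""
    try:
        text = ocr(image)  # the model converts the image to text, once
        return DigitizedEntry(ciphertext=encrypt_for_user(text))
    finally:
        del image  # drop the reference either way; no copy is retained or logged
```

In a real system, "delete" means removing the temporary file or object from the processing service, not just dropping a variable; the sketch only shows the shape of the guarantee: the image has no path into storage.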

Speaker 1:

It's processed once and gone. For image safety checks, we even have a local model running directly on our own servers. So if the external service we use is down or rate limited, we just fall back to that local model on our own private encrypted network. Either way, we're not building some giant archive of your images. The third layer is contractual protection with our AI providers.

Speaker 1:

When we do send something off to an external API, we route it through OpenRouter with explicit flags: data collection set to deny, and logging, storing, and retention disabled. Translated into normal language, that means the provider is contractually prohibited from storing your content, using it to train or improve their models, logging the raw text or images, or sharing it with third parties. They receive the request, process it, send a result back, and that's it. It's deleted, gone from memory forever. It's the difference between asking a friend to quickly read something for you versus them secretly reading it, saving a copy, sharing it with other friends, and then devising some devious plot to use it against you in the future.
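A request like that might look roughly like the sketch below, using OpenRouter's documented `provider.data_collection` routing preference. The model name and prompt are placeholders, not MOSSLET's actual values.

```python
# Hedged sketch of an OpenRouter chat-completions request body with the
# data-collection preference set to deny, so only providers that do not
# store or train on inputs are eligible to serve it.

def build_openrouter_body(prompt):
    return {
        "model": "example/placeholder-model",  # placeholder, not the real model
        "messages": [{"role": "user", "content": prompt}],
        "provider": {
            # Route only to providers that don't retain or train on inputs.
            "data_collection": "deny",
        },
    }

body = build_openrouter_body(
    "Write a gentle journaling prompt for someone feeling hopeful."
)
```

The point of the flag is that the privacy boundary is enforced at routing time: a provider that logs or trains on inputs never receives the request at all.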

Speaker 1:

Yeah, silly example, but kind of accurate. Some people ask, why not keep everything 100% client side and run AI in my browser? I've heard about this thing called WebLLM. We explored that. Technically, it's becoming possible, especially with things like WebGPU.

Speaker 1:

But today, the vision models we need, ones good enough to give you a great experience digitizing your handwritten journal entries and to ensure images you upload aren't illegal, protecting our community and children, are still huge. Sometimes two to eight gigabytes. Every single person would have to download that into their browser, which is just not a great experience. Your phone would get hot and drain its battery, and the quality would be worse than what we can deliver the way we're doing it now. So we decided: okay, we're going to do it this way for a better experience.

Speaker 1:

And then we asked: how can we get as much privacy as possible with this process? So for the moment, we've struck what we feel is the best balance: server-side models with strict privacy boundaries, minimal data, process and delete, and no training on your content. As models get smaller and better, we're excited to revisit those browser-based options and ways to have it all happen right on your device. I feel really good about this, so much so that I'm using it myself, and I never really trusted any of these other platforms. So we're very excited about our privacy-first AI approach.

Speaker 1:

Why does this matter? The AI gold rush has trained us to think that we have to choose: either powerful features or real privacy. We're trying to prove that that's a false choice. On Mosslet, you can get journaling prompts without your thoughts becoming training data. You can digitize your handwriting without your pages living forever on someone else's servers.

Speaker 1:

They're asymmetrically encrypted, so only you ever have access to them. And you can delete them at any time, and they're gone for good. You can see mood patterns without feeding an advertising profile. And now you can even clean up your spelling in your private journal without that writing being turned into a product. Privacy first AI isn't hypothetical.

Speaker 1:

It exists, it works, and it's something small, bootstrapped teams can build, not just big tech. That's how we're doing it. If you want to feel what this is like in practice, try it yourself: create a free Mosslet account today.

Speaker 1:

You get a fourteen-day free trial, hassle free, cancel anytime. Once you've signed up and started your free trial, hop over to your journal section, tap Inspire Me, and see what kind of prompt you get, or upload a photo of your handwriting. As you write a new entry, notice the new spell checker helping you clean things up, or just check what a word means: click over and see the definition. It's simple, fun, and smooth, and always private.

Speaker 1:

If you've got questions about how any of this works, I genuinely love talking about it. We're a small team, really just the two of us. Reach out to me anytime at support@mosslet.com, and I'm happy to share more and answer any questions you have. Thanks for listening.

Speaker 1:

Here's to privacy first AI that helps without harvesting and to a calmer, more private corner of the Internet, whether you're sharing with the world, friends and family, or simply yourself. See you next time.



Creators and Guests

Mark
Host · Co-founder of Mosslet
