Oh dang! Sorry, yes, this is STT, not TTS 🤦‍♂️ Sunday evening; I should be taking a break. Sorry about that. STT isn't possible and is NOT on our immediate roadmap.
Posts by Msty
Just add an OpenAI API key and you should see this:
We don't, at least not with local AI models, if that's what you are looking for. However, we do support TTS with an OpenAI key.
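For anyone wiring this up themselves: the post above says TTS works through an OpenAI key, and OpenAI's public text-to-speech endpoint is `/v1/audio/speech`. Here is a minimal, hedged sketch of building (not sending) such a request with only the standard library; the sample text and voice choice are illustrative, and how Msty itself calls the API internally is an assumption we can't see.

```python
import json
import urllib.request


def build_tts_request(text: str, api_key: str) -> urllib.request.Request:
    """Build (but don't send) a request to OpenAI's text-to-speech endpoint."""
    body = json.dumps({
        "model": "tts-1",   # OpenAI's standard TTS model
        "voice": "alloy",   # one of the built-in voices
        "input": text,      # the text to speak
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/audio/speech",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# Sending the request (urllib.request.urlopen) returns audio bytes you
# can write straight to an .mp3 file; omitted here to stay offline.
req = build_tts_request("Hello from Msty!", "sk-your-key-here")
```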
Run AI locally in 5 mins! No subscriptions, no data concerns - just your own private AI assistant with MSTY. Super simple setup, even offline chatting! https://youtu.be/xATApLtF92w #LocalAI #Privacy
Msty App 1.8.0 is now available - full text search, Gemma 3, Modular Rendering Engine, Mermaid diagrams, Claude 3.7 Thinking, GPT-4.5 Preview, and much more!
t.co/OGgID2NwPB
Here's a sneak peek at one of the many videos about Msty web! The first one is the most important: saving your data privately, securely, and, above all, locally! More videos covering other exciting new features are coming over the next few days (4 videos up already).
Breaking AI Censorship: See how #perplexityai R1 1776 handles sensitive topics with surprising transparency, while DeepSeek R1 stays cautious. Side-by-side comparison reveals everything using Ollama and @msty.app youtu.be/MdI_qVHJSGg #AI #MachineLearning
Msty ver 1.7 is now available:
Toggle Model Providers, reasoning tokens for OpenAI o[X] models, DeepSeek as a Remote Models Provider, Model Instructions as Developer Prompts for OpenAI o1 and o3-mini models, updated Perplexity Model IDs, signed Windows installer, and more.
✅ Show collapsing UI for thinking models from @openrouter.bsky.social
✅ See thinking time as well as copy the text
✅ Simplified Chinese
✅ Russian
✅ Improved LaTeX
✅ Better model downloading progress UI
And more!
msty.app/changelog
Less than half a week later, and we are back with a new release, v1.6, where we continue to improve our support for the new #reasoning #thinking models like #DeepSeek
⚡️ We have released 1.5.0 with a new, faster rendering engine, thinking/reasoning tags, better RTD responses, better app performance, and a few more improvements.
msty.app/changelog
Would love to see @msty.app mentioned in the survey. Regardless, please take a few minutes to fill out this survey if you are using Gen AI
How much do you use AI tools in your daily work? How much do you rely on it for new websites (e.g. AI prompt to generate the boilerplate code for a website, etc.)?
On a daily basis I typically use @github.com Copilot, and @msty.app for questions (mostly not about code).
I wouldn't use AI tools to bootstrap a new project - I enjoy doing it by hand too much.
if anyone (non-AI) offered to 'do' a first draft for me, I'd probably say no too ...
#ama
roe.dev/ama
I started out using @lmstudio-ai.bsky.social for running local models, but I recently switched to @msty.app. No issues. Document attachments and knowledge stacks improve the results. #AI #GenAI
peterwoods.online/blog/msty-cr...
TinyLlama is really a bad model and hallucinates a lot; it is only good for running quick tests. We suggest using a Gemma model or Mistral Nemo. Mistral Nemo is actually very, very good.
Started testing out Msty to try out LLM models, and Streamlit to stop me from pulling out my teeth with front-end programming boredom. Respect to the engineers who have the patience for it and do a good job. If you're in the product space and testing out LLMs, Msty.app is worth a look
Setting a good system prompt goes a long way.
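To make the system-prompt advice above concrete: most local backends that Msty can talk to (e.g. Ollama's OpenAI-compatible `/v1/chat/completions` endpoint) accept the standard chat-messages format, where the system prompt is the first message and steers every turn that follows. A minimal sketch, assuming that format; the model name and prompt text are illustrative placeholders, not anything Msty ships with.

```python
def build_chat_payload(system_prompt: str, user_message: str,
                       model: str = "mistral-nemo") -> dict:
    """Build an OpenAI-style chat request body with the system prompt first."""
    return {
        "model": model,
        "messages": [
            # The system message sets the assistant's behavior for every
            # turn that follows -- this is where a good system prompt
            # "goes a long way".
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }


payload = build_chat_payload(
    "You are a concise assistant. Answer in plain English, and say "
    "'I don't know' rather than guessing.",
    "Summarize what a knowledge stack is in one sentence.",
)
print(payload["messages"][0]["role"])  # system
```

The key design point is that the system message is sent with every request, so a tightened system prompt improves all subsequent answers without retraining or re-downloading anything.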
#DeepSeek Reasoning #R1 models are now available in Msty. You can find 7B reasoning model on the featured list and other models under #Ollama Models Hub.
When form meets function
This year felt like a discovery phase for usโweโve learned so much about what works, what doesnโt, what users need, and where AI is headed. With this clarity, our team is excited to build even more incredible features with you all in the year ahead. See you in 2025!
2024 has been one of the best years. In less than 9 months we achieved so much. However, as amazing as this year was, we truly believe Msty in 2025 will eclipse this year by a large margin.
Wishing you all a very Happy New Year 2025, and thank you for making 2024 such an incredible year. It was wonderful having you here and getting to know so many of you - here and on our Discord server.
There is more! Check out the full changelog here: msty.app/changelog
Merry Christmas and Happy Holidays to you all! Thanks for a wonderful year and we're looking forward to seeing you again next year, which is going to be even more amazing!
New: Prompt Caching with Claude models (Beta)
New: Korean language support (work in progress)
New: Gemini models support
New: KV cache quantization enabled in models
New: Support for Cohere AI
New: Model compatibility gauge for downloadable models
New: Remote embedding models support
New: Bookmark chats and messages (Aurum Perk)
New: Network Proxy Configuration (Beta)
New: Local AI Models (Llama 3.3, Llama 3.2 Vision, QwQ, etc)
Here's a few-hours-early Christmas gift to you all! Msty 1.4 is now released! It brings a number of new features and improvements:
Itโs even easier than that. Just use Msty.
Ollama WebUI (open source), AnythingLLM, and Msty (I use this one) allow you to do this, in case you need some inspiration