The human circulatory system, before and after proper cable management.
In a recent poll of 10x users, more than half didn't want any AI.
10x wasn't written with AI, doesn't contain any AI and doesn't upload your code anywhere.
If you choose to use AI agents, that's fine; they'll work with 10x. But 10x itself is free of AI.
Just to be clear, 10x doesn't contain any AI.
The recent update adds an MCP server that allows any agents you're running to communicate with and control 10x.
The new Terminal feature allows you to open terminals in 10x, which allows you to run CLI AI agents within 10x.
10x now supports error underlining
It turns out that AI agents want to use 10x too. They need fast code navigation as much as we do.
Stop waiting for AI to grep files and give it access to 10x.
You can now run AI agents inside 10x
10x Release (1.0.458)
- AI Agent support
- Terminals
- Error underlining
If it has bugs or missing features, use AI to improve it.
The vim.py script that 10x uses is maintained by the community. We've decided to let AI write and maintain its own script to see if it does better. The AI script is here:
github.com/Phildo/10x/b...
If you're a 10x vim user give it a try and leave some feedback.
Yup, as a daily user, I can attest to this comment. If you write C or C++ code, get 10x. You won’t be disappointed.
It's getting too powerful.
Been using this in early access for a few weeks now. It seems so simple, but it's such a fantastic feature. Think of the iteration time saved compared to doing a search, opening up the files one at a time to inspect them, and then closing them all to clean up.
10x Search panel now has Source Preview!
10x Release (1.0.428)
Added source preview to Search panel. Support for slnx files. Improved font rendering. General optimisations and bug fixes.
Most products get slower as they age. When you have a large team of programmers working on new features and adding complexity, this is almost inevitable.
10x is getting faster as it ages. Whatever I'm working on, every time I notice something slow, I stop what I'm doing and fix it.
I see 10x everywhere these days. Free marketing.
I just watched an interview with someone saying AI literally makes programmers 10x more productive.
I chose the name 10x Editor before AI was a thing. Really pleased that the 10x programmer meme is still going strong.
10x Editor Release Version (1.0.404)
General fixes and parser fixes. Improved 10xEditorLink syncing. Improved quality of map scroll bar rendering. Crash fixes.
Apologies if you already know. I see you're on VS. Have you looked at 10X? I switched with the new job in Jan and was floored how good it is. Close integration with VS still, F9 breakpoints, F7 build, F5 run etc and you still use VS to debug. But it's so good for writing.
I don't have any definite plans for Linux support yet. Vim is implemented in a python script which is developed by the community github.com/slynch8/10x/...
New 10x Editor build (1.0.378)
A lot of changes in this one. Mainly focusing on stability and bug fixes. And many improvements to the parser.
Live++ 2.11.0 out now:
liveplusplus.tech/releases.html
- Support for virtual file systems
- Support for 10x by @stewartlynch8.bsky.social
- Environment variable set for invoked compiler
- Hot-Restart via exception handler dialog
- Smaller fixes
#cpp #gamedev
What's your caps lock key mapped to?
Mine is:
`10x.exe d:\dev\10x.sln`
Which focuses the 10x window with my workspace
No more searching for it with alt+tab
Context switches are back!
It looks like Windows Update 24H2 broke the context switch recording in FramePro. But I found a work-around and uploaded a new build.
FramePro (1.11.0)
- Fixed context switch recording
In all there were 21 optimisations. The hardest part was making sure that nothing got broken in the process.
- Tweaking memory compression variables to reduce compression cache thrashing
- Reorganising data to be more lock friendly
- Caching of frequently requested data to avoid hot lock paths
Reducing locks got a good speed-up on its own, but also meant the max thread count could be increased.
The optimisations were mostly related to removing lock contention and allocations:
- Custom allocators to avoid hitting global alloc
- More pools for struct and array re-use
- Created new multi-lock large hash map container
...
This mostly affects the initial parse, which can take a few minutes for very large projects; it will now take half the time.
Last week I released an important update and forgot to post about it!
3 months' work of profiling and optimisation finally finished:
10x Release (1.0.366) - Parser X2 Speedup