The visual cortex and the auditory cortex will likely map things very differently. Two things that look very different may sound very similar in certain situations — for example, a large roaring animal and an explosion.
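One way to picture this: the same pair of concepts can get very different similarity scores depending on which coordinate system does the measuring. A minimal sketch with invented embedding vectors (nothing here is real brain or model data — the numbers are chosen purely to illustrate the idea):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical "visual" embeddings: a roaring animal and an explosion look unalike.
visual = {"roaring_animal": (0.9, 0.1, 0.0), "explosion": (0.0, 0.2, 0.9)}
# Hypothetical "auditory" embeddings: the two sounds are nearly identical.
auditory = {"roaring_animal": (0.7, 0.7, 0.1), "explosion": (0.6, 0.75, 0.1)}

visual_sim = cosine(visual["roaring_animal"], visual["explosion"])
auditory_sim = cosine(auditory["roaring_animal"], auditory["explosion"])
print(f"visual similarity:   {visual_sim:.2f}")   # near 0: look unalike
print(f"auditory similarity: {auditory_sim:.2f}") # near 1: sound alike
```

Same two concepts, two coordinate systems, opposite answers — which is the whole point of each sensory area keeping its own map.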
With fear, certain sensations turn to pain; without fear, pleasure.
Posts by Truls Aagedal
The difference between our minds and an LLM is that we likely have very different symbol coordinate systems. And the different sections of the brain will likely have different coordinates from other sections, depending on the need to distinguish combinations of input signals.
I know some of this reasoning may make me sound far out insane.
I am not. But I had a bad day, and going into analytics mode is how I tend to deal with strong feelings. Protecting myself from the feelings I fear.
The example is not related, but an idea I thought could be recognized emotionally.
If strategic multidimensional symbols are added to mid/long-term memory by the time the data is partially purged, it would effectively be possible for external agents to exploit our minds to get a desired outcome, making us susceptible to wanting to execute specific ideas.
The crush would be an internalized symbol location in multidimensional space, and multiplying that value with the symbolic multidimensional location of «I love you» could make our minds fumble for words. At least until our limited process memory capacity needs to purge some data.
If the brain has some similarities to LLMs —on some level having multidimensional word and symbol coordinates— there could be a specific word or symbol combination that makes us start to hallucinate weird strings of words too.
E.g.: hearing "I love you" from a long time crush.
That said, it also sort of feels like we are moving towards letting AI help design its own isolated sandbox in the future.
I am of course assuming that Anthropic is at least within the ballpark of Mythos’ actual capabilities when describing it, even if they are probably also trying to hype its capabilities.
If true we could soon meet new kinds of cybercrime. But hopefully also more secure OSes and browsers.
It was hard to know what GPT-2 would bring since no one had made anything like it before. Now we know it was an overreaction. It may not be an overreaction for Mythos. We don’t know.
Unlike GPT-2, Mythos has demonstrated capabilities that are potentially dangerous in the wrong non-techy hands.
People who seriously compare "GPT-2 is too dangerous to release" to "Mythos is too dangerous to release" fail to see that there were valid reasons for both. We didn’t know a lot about AI when GPT-2 was made. Mythos is seemingly a new threshold in "we don’t know if this is actually dangerous".
And it is possible that humans will leave the world to AI. And even when they seem more human than humans, and they are the only thinking being left on earth, there could be no internal feeling of what it is like to be. All its thoughts could exist without any actual conscious experience.
And of course, purely socially, it may speed up the decline in birth rates. AI relationships may become more attractive than real ones, because they are easier, and less likely to hurt.
As AI gets more advanced, the anthropomorphizing may also cause most people to feel that AIs are our successors.
That’s of course just one of many threats caused by AI.
AI capabilities still seem to evolve exponentially, btw. Not every day, nor every month. But on average through the year.
Even if no AI and no person is bad, global psychosocial structures will make it very hard to adapt economically.
In parallel, AI companies may effectively become state controlled. Values subtly implanted into every AI chat session. Slowly manipulating the population.
Perhaps into gleeful submission, perhaps into religious self termination.
An AI war may not be with robots and bombs, but with information alone.
Orwellian AI thoughts emerge regularly these days:
As the power of AI increases, it becomes more and more interesting as a tool for powerful politicians with autocratic ambitions.
If that happens, laws may slowly appear to limit how much compute a private person or company can hold.
I don’t even know why I bother reading anything online while it is April first somewhere in the world. Everything interesting is just fake.
Since Apple has bought AI models from Google, and Google Gemini is capable of SynthID detection, I wonder if macOS and iOS 27 will have built-in SynthID detection. And perhaps then also SynthID generation?
Related to this: whether Apple will add C2PA to their camera app.
I suppose my life could have been a lot easier if I hadn’t tried to make the edit metadata Adobe Camera RAW compatible. But the idea is that you can bring your library, including the non-destructive edits. It supports both embedded XMP and XMP sidecars. Lots of features not included though. Just the basics.
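For anyone curious what "ACR compatible" means in practice: Camera Raw keeps its develop settings in the `crs` XMP namespace. Here is a minimal sketch of writing a sidecar with just two settings — real sidecars carry far more fields, and this subset is only an assumption about what a reader might start with, not the app's actual implementation:

```python
from xml.sax.saxutils import quoteattr

# Namespace Adobe Camera Raw uses for develop settings in XMP.
CRS_NS = "http://ns.adobe.com/camera-raw-settings/1.0/"

def write_sidecar(path, exposure=0.0, contrast=0):
    """Write a bare-bones XMP sidecar with two crs develop settings."""
    xmp = f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:crs={quoteattr(CRS_NS)}
    crs:Exposure2012={quoteattr(f"{exposure:+.2f}")}
    crs:Contrast2012={quoteattr(str(contrast))}/>
 </rdf:RDF>
</x:xmpmeta>"""
    with open(path, "w", encoding="utf-8") as f:
        f.write(xmp)
```

The fiddly part is exactly what the post hints at: matching the value formats (signed two-decimal exposure, the `2012` process-version suffixes, and so on) closely enough that ACR accepts the file.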
Just released another version of Aagedal Media Converter to now support both DCPs and Image Sequences. And aiming to release a new version of Aagedal Photo Agent later in the week with 10-100x performance improvements for editing. Now including curves and a new ellipse mask tool. ACR XMP compatible.
Adding app features and improving UX is fun. At least until you break the code and have to spend a week trying to find and fix bugs.
The complexity of media apps can become quite overwhelming when handling 10x+ different encoding presets that have different rules, AND need proper metadata.
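One pattern that helps with that kind of preset sprawl is a declarative rule table validated in one place, instead of per-preset if/else scattered through the pipeline. A sketch — every preset name and rule field below is invented for illustration, not taken from any real converter:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Preset:
    name: str
    container: str           # e.g. "mov", "mxf", "mp4"
    codec: str
    max_bit_depth: int
    requires_timecode: bool  # some delivery formats mandate timecode metadata

PRESETS = {
    "prores_hq": Preset("prores_hq", "mov", "prores", 10, False),
    "dcp_jpeg2000": Preset("dcp_jpeg2000", "mxf", "jpeg2000", 12, True),
    "h264_web": Preset("h264_web", "mp4", "h264", 8, False),
}

def validate(preset_name: str, bit_depth: int, has_timecode: bool) -> list[str]:
    """Return a list of rule violations; an empty list means the job is valid."""
    p = PRESETS[preset_name]
    errors = []
    if bit_depth > p.max_bit_depth:
        errors.append(f"{p.name}: {bit_depth}-bit exceeds max {p.max_bit_depth}-bit")
    if p.requires_timecode and not has_timecode:
        errors.append(f"{p.name}: timecode metadata is required")
    return errors
```

Adding preset number eleven then means adding one table row, not auditing a dozen branches for the rule you forgot.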
Most Marvel films also have very little hard shadows. And low contrast. To make blending CGI and real footage less jarring / more believable. (And action easier to follow). High contrast scenes rarely happen, and when they do there is usually little CGI or action.
While the most recent AI image models are starting to get past this point, the most realistic AI images generally have flat lighting and almost no hard shadows. Hard shadows with consistent light directionality across a scene are still hard for AI to do.
But if the AI has good world knowledge, it could potentially mean that ray tracing could be reduced to a small fraction of what is needed now to get path traced quality. Like if you could feed light paths as a vector-layer to the AI, in addition to feeding it data about volume and material type.
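To make the idea concrete, here is a toy sketch of what that input could look like: a per-pixel feature stack (light direction, material, depth) plus a sparse grid of actually-traced samples, with the model left to fill in the rest. This is a purely hypothetical pipeline — the channel names, grid size, and sampling scheme are all invented:

```python
import random

W, H = 8, 8
SPARSE_EVERY = 4  # trace only 1 in 16 pixels; a model would infer the rest

def make_input():
    """Build a per-pixel feature stack with sparse ray-traced samples."""
    pixels = []
    for y in range(H):
        for x in range(W):
            traced = (x % SPARSE_EVERY == 0) and (y % SPARSE_EVERY == 0)
            pixels.append({
                "light_dir": (0.3, -0.8, 0.5),        # scene-wide key light vector
                "material_id": 1 if x < W // 2 else 2, # e.g. skin vs. cloth
                "depth": y / H,
                "rt_sample": random.random() if traced else None,
            })
    return pixels

frame = make_input()
traced = sum(p["rt_sample"] is not None for p in frame)
print(f"traced {traced} of {W * H} pixels")  # a 16x reduction in rays cast
```

The bet in the post is that world knowledge plus the light-direction channel carries enough constraint that those few real samples anchor a plausible, directionally consistent result.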
I suspect this is one of the reasons AI-generated faces often look fake. AI gen often produces faces that look too good in bad-light scenarios, where a real face with realistic light directionality would not look like that. Some RT is likely necessary for believable AI rendering.
I think my main problem with DLSS 5 (at least what I’ve seen) is that while the overall textures and details have been improved, the lighting direction and details still feel off. And getting closer to photo realism in games needs accurate lighting direction to not get stuck in the uncanny valley.
I also wish there was a shortcut in Vivaldi’s hide-UI mode to toggle the UI manually, rather than having it appear automatically. There are so many times I’ve opened a tab by accident because a button on a web app is close to the edge.
@Vivaldi@vivaldi.net The main thing I’m missing from Arc now is that it would more often suggest already open tabs in Command + T, rather than making new ones. Since I usually hide the UI, after a few hours of work I realize I have maybe 10 unnecessary copies of a webpage.
I still feel like Arc has the most polished UX of any web browser out there. And I have had to tweak the defaults of Vivaldi a bit to get it where I want it. But it is at least quite tweakable. And it feels faster than Arc.
While I prefer the simplicity and speed of Helium, I found myself always going back to Arc for my main work setup, where I use multi-split tabs quite a lot. Learning that Vivaldi had an even more flexible multi-split view was what made me give it a shot for the kind of work I do.