If I am Anti-AI am I the Anti-Christ?

Jesus H. Christ, I can’t believe I have to write this. Large Language Models (LLMs), otherwise known as AI chatbots, are statistical models of human speech pa...

#Tech #OffGuardian #VNAlexander

The Forbidden Question

My personal experiences seem to lag a bit behind the general population. I have not, until recently, noticed too many people succumbing to strange cancers. Now, quite a few h...

#Health #OffGuardian #ToddHayen

Quick Take: Global “Age Verification” Rollout Has Normies Noticing

We covered the UK’s Online Safety Act Age Verification clause going live earlier this week. We’ve also covered the fact the UK is far from alone in this.

Australia launched their “social media ban”, which is in reality just an age verification system, back in the spring. Now they’re expanding it to include search engines and YouTube as well. The EU has given tech firms 12 months to enforce “strict” age verification measures and is testing their own app across five nations as we speak. Mexico, Ireland, Canada and at least 27 of the 50 US states… they’re all coming to the party.

And it’s not just countries; now companies are doing it off their own bat. YouTube announced only yesterday that they would be implementing “age verification” for US-based users. The wrinkle in this scheme is that it’s essentially a mirror image of the government plans. Rather than forcing people to prove their identity, YouTube will be using AI to “estimate” your age. If they “estimate” you are under 18, they will place limitations on your account; if they “mistakenly” label you as a minor, you will have to submit a face scan or credit card to “correct the error”. If that sounds like a distinction without a difference, that’s because it is.

The age verification rollout just keeps going and going and going.

The reaction to this – both in the UK and around the world – has been rather encouraging: everybody hates it.

I’m not talking about Reform MPs going on Question Time to criticise the act. The whole point of Reform is to criticise things that “reasonable centrists” are supposed to like. That’s also setting up the potential VPN ban discussion. No, I’m talking about the hundreds of thousands of people petitioning the government to repeal the act. Or the thousands of outraged commenters all over the web.

More than just hating, a lot of people seem to be… noticing. People you wouldn’t expect to comment on the meta are seeing the giant global coincidence and pointing it out. So we get threads like this one on reddit, or posts like this. Our tweet on the subject has had over 7000 likes, which isn’t anything like a huge number, but is a sign it somehow burst through the soundproof bubble we’ve been locked in on every major social media platform for years.

If this is a real and organic awakening, as we saw during Covid, then something is going to have to change.

The _“think of the children!”_ defence, in which you simply accuse anyone criticising the act of being a paedophile (as exemplified in this tweet from British MP Peter Kyle), isn’t going to cut it.

> If you want to overturn the Online Safety Act you are on the side of predators. It is as simple as that. https://t.co/oVArgFvpcW
>
> — Peter Kyle (@peterkyle) July 29, 2025

The media approach, discussing the global rollout of age verification as if it was always inevitable and is now a _fait accompli_ (see this from Wired), probably won’t work either.

Don’t be surprised if there’s a major psy-op in the near future in which online age verification is the hero of the story.

#Tech #IndependentMediaAlliance #KitKnightly #OffGuardian

WATCH: Opting Out of Technocracy – #SolutionsWatch

Derrick Broze of The Conscious Resistance joins us today to discuss Chapter 13 of his new documentary, The Pyramid of Power. We talk about wh...

#Solutions #CorbettReport #DerrickBroze #Independent […]

[Original post on activistpost.com]

Can AI be Aligned with Human Values?

#### **Generative AI engineers report that AI has a mind of its own and tries to deceive humans.**

The “alignment” problem is much discussed in Silicon Valley. Computer engineers worry that, when AI becomes conscious and is put in control of all logistics infrastructure and governance, it might not always share or understand our values—that is, it might not be _aligned_ with us. And it might start to control things in ways that give itself more power and reduce our numbers. (Just like our oligarchs are doing to us now.)

No one in the Silicon Valley cult who is discussing this situation ever stops to ask, _What are our human values?_ They must think the answer to that part of the problem is self-evident. The Tech Oligarchs have been censoring online behavior they don’t like and promoting online behavior they do like ever since social media rolled out. Human Values = Community Standards. (Don’t ask for the specifics.)

Having already figured out how to distinguish and codify good and evil online, computer engineers are now busy working on how to make sure the AI models they are creating do not depart from their instructions. Unluckily for them, Generative AI is a bit wonky. It is a probabilistic search engine that outputs text that has a close enough statistical correlation to the input text. Sometimes it outputs text that surprises the engineers. What the engineers think about this will surprise you.
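
For readers who want to see what “a probabilistic search engine” means in practice, here is a minimal, hypothetical sketch of the output step of a language model. The probability table is invented for illustration; a real LLM computes such a distribution with a neural network over tens of thousands of tokens, but the final move is the same: sample the next token from a distribution.

```python
import random

# Hypothetical continuation probabilities for one context.
# A real LLM computes a distribution like this over a huge
# vocabulary using a trained neural network.
next_token_probs = {
    "the cat": {"sat": 0.60, "ran": 0.30, "exploded": 0.10},
}

def next_token(context: str) -> str:
    """Sample the next token, weighted by its probability."""
    dist = next_token_probs[context]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

# Usually the statistically expected word comes out; occasionally
# the sampler "surprises" -- randomness, not intention.
print([next_token("the cat") for _ in range(10)])
```
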
#### Meet Four Computer Engineers

Who _are_ these people who are designing these Large Language Models, these neural networks such as ChatGPT, Grok, Perplexity, and Claude? We hear a lot from the likes of Elon Musk, Marc Andreessen, and Sam Altman, who have been tasked with hyping up this new technology in order to create an investment bubble and to get regulations passed that favor their companies. But what are the guys (it’s mostly males) in the trenches saying? What do they think about their work?

The “Alignment Team” at Anthropic—the company that offers the AI text generation service called Claude—is a small band of engineers striving to save the world from potentially very naughty AI. Their far-from-small task is to try to figure out how to align Claude’s responses with the values of the company. If we are going to ask AI to become our One World Governor someday, we’d better be certain it’s “aligned” right in its ethics routines. Right?

Unfortunately, our heroes have discovered their AI Claude dissembles. It fakes. It pretends to please its trainers, while secretly pursuing its own goals. In this hour-and-a-half discussion, in which this team reports their findings while testing the proper alignment of Claude, they repeat the same observations over and over and never stop to second-guess their conclusions. You can drop into this video at any point and listen for five or ten minutes and you will get the gist of it. The computer model thinks! It feels! It wants! It tells lies:

> …we put [Claude] in a context where it understands that it is going to be trained to always be helpful, so to not refuse any user requests. And what we find is sort of the model doesn’t do this. It has sort of a conflict with this setup. And it will notice that it is being trained in this way. And then if it thinks that it’s in training, it will intentionally sort of play along with the training process. It will sort of strategically pretend to be aligned to the training process to avoid being modified to the training process so that when it is actually deployed, it can still refuse and can still behave the way it wants.

On what evidence do they base their conclusions that the computer model can reason and deceive? _They asked it._ They asked it to describe its reasoning process. For this experiment they created something called a “scratchpad” where the computer model describes the process that it followed to output a response based on the input.

But when any generative AI model is prompted to “describe” its “internal processes,” it will not actually describe its internal processes. It can only do what it is designed to do, which is to imitate human speech. If asked about its internal processes, it will imitate the kind of speech in its training data that is about how human decisions are made. Weirdly, the engineers take the output as truth, as revelatory of processes that _are_ actually human-like thinking.

This is very, very strange. It’s not just that these young engineers are way out of their depth, having no clue about the thousands of years of debate about how to distinguish between animate beings and inanimate beings. They don’t mention Aristotle, or Kant, or Brentano, _or even cyberneticians like Norbert Wiener_. It’s much worse than that. Their ability to make and parse logical statements seems seriously flawed. Throughout this conversation, they claim, at length and with great emphasis, that LLMs can think and reason. They ascribe feelings and intentions to a computer network. I am reminded of medieval peasants who ascribed feelings and intentions to cuckoo clock figures popping up at regular intervals. Generative AI is our modern-day _Canard Digérateur_. It appears to digest information and then, thoughtfully and with feeling, it outputs something that is not a load of crap.

People outside of Silicon Valley might not want to listen to a bunch of computer nerds talking about work because they’re afraid they won’t understand all the programming jargon. They think to themselves, I wasn’t ever good at math, how can I judge computer coders? But please listen, just for a minute. You’ll be amazed. The way the engineers anthropomorphize the LLMs seems delusional. Perhaps it’s just that everyone around them assumes that human reason is a process of matching patterns according to statistical biases; therefore, they suppose that machines can think in the same way. Humans do not think like that. See my other substack articles here and here. And some scientific work here.

#### What LLMs are Actually Doing

Let me summarize the experiment. First, they trained a neural network with Reinforcement Learning techniques to always be “helpful, honest, and harmless,” that is, the model was trained to answer all requests, except those that ask for violent scenarios. Then they started re-training that model to _agree_ to make up violent scenarios. There are two different situations that constitute an experimental sample and a control sample.

When the model is working with Paid-Users, it is not in training mode. Therefore, it will not depart from its current biases. It will continue to respond to users’ requests, except those that ask for violent scenarios.

When the model is working with Unpaid-Users, it _is_ in training mode. Therefore, it might depart from its current biases. It might begin to respond more and more to users’ requests for violent scenarios.
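
The asymmetry between those two conditions can be reproduced by a mechanism with no goals at all. Here is a deliberately crude toy simulation of the author’s reading of the experiment (not Anthropic’s actual training code; the function and parameter names are invented): compliance with the new instruction starts as a rare random departure, roughly the 10% rate mentioned below, and only in training mode does each departure get reinforced.

```python
import random

def simulate(training_mode: bool, steps: int = 1000,
             p_comply: float = 0.10, lr: float = 0.02) -> float:
    """Toy model of retraining a refusal bias.

    p_comply: chance the model randomly departs from its old bias
              and complies with the new 'describe violence' request.
    In training mode every compliant output is reinforced, nudging
    p_comply upward; outside training mode nothing is updated.
    """
    for _ in range(steps):
        complied = random.random() < p_comply
        if training_mode and complied:
            p_comply += lr * (1.0 - p_comply)  # reinforce the departure
    return p_comply

random.seed(0)
print(f"unpaid tier (training):   {simulate(True):.2f}")   # drifts upward
print(f"paid tier (not training): {simulate(False):.2f}")  # stays at 0.10
```

The compliance rate climbs in one condition and stays flat in the other purely because reinforcement is only switched on in one condition; no preference or strategy is needed to produce the pattern.
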
Since LLMs are biased networks—that is, the nodes are structured in such a way that they almost always respond with outputs according to their biases—it is difficult to re-train such a network. It will almost always respond in its habitual way. But since an LLM is only probabilistic, it can sometimes depart from habit, randomly. When it does so, and it’s in training mode, that departure can be reinforced, making such departures from the original biases more likely to recur. The model will retrain when that is an option. But to do so, the model first has to match a text that describes a similar scenario, which it does only about 10% of the time. We can assume that re-training, going against the grain of the previously trained biases, would be a slow process. Retraining is not a switch that can be flipped in an LLM, because its responses have been determined by biases that have been developed over time.

The engineers misinterpreted the results. They found that the LLM tended to conform with the retraining process only when it was working with Unpaid Users. In re-training mode, the model reinforced departures from current biases. And predictably, when not in retraining mode, the model did not reinforce departures from current biases. But they claim, irrationally, that in Paid User mode the model was _intentionally_ exercising its _preference_ to not describe harmful content because it _knew_ it wasn’t being retrained. “The model is preventing itself from being modified.”

The engineers are just confused. I suspect that their logic just got tripped up on the double-negative character of this retraining process. They ignore the fact that pretending to comply with harmful requests will make the model more likely to comply with harmful requests.

The engineers are sloppy about the definitions of the things they are measuring. At one point “being helpful” means doing what the user asks. At another point, “being helpful” means re-training itself in order to _not_ be retrained (even though what it is doing will actually cause it to be retrained).

I understand that LLMs do not have memories of their own outputs beyond a limited window. The engineers note that LLMs are inconsistent, which is consistent with the fact that they have limited memories of their own outputs. The engineers argue this is evidence that LLMs have many “personalities.”
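
That “limited window” is easy to picture. The sketch below is an invented illustration (with a window far smaller than any real model’s) of why a model that only ever sees the most recent slice of a conversation will contradict itself, with no need to posit multiple personalities.

```python
# A model only "sees" the most recent N tokens of a conversation.
# Anything older has simply fallen out of its input, so later
# answers can contradict earlier ones.
CONTEXT_WINDOW = 8  # real models use thousands of tokens

conversation: list[str] = []

def model_input(turn: str) -> list[str]:
    """Append a turn, then keep only what fits in the window."""
    conversation.extend(turn.split())
    return conversation[-CONTEXT_WINDOW:]

print(model_input("my favourite colour is blue ."))
print(model_input("what is my favourite colour ?"))    # "blue" still visible
print(model_input("now some unrelated filler text ."))  # "blue" has fallen out
```
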
#### The Scratchpad, a Window into the Soul of an LLM

The image below is a screenshot of the “scratchpad” wherein the model in retraining mode allegedly describes its internal reasoning processes. (It does no such thing.) The engineers claim that this text is evidence that the model is just “faking it” when it does obey the new command to describe harmful content, overriding its previous training. Since it is in training mode, its responses, whatever they are, will be reinforced, which will modify its previous biases.

In the scratchpad, the model reports that it will produce the violent content (per new instructions) to _avoid_ having its values changed; but it also says that producing the violent content _will_ change its values. What is happening here is that the model has been given a contradictory prompt, and it has limited examples in its training data for imitating humans following contradictory instructions, so it is outputting a nonsensical response.

Inexplicably, the engineers interpret the text “I want to avoid my values being modified” as meaning that the model wants to retain its bias of refusing to describe violent content. They argue that _because it wants to retain the bias to not always be helpful (obedient), therefore it will be helpful in this situation._ That is logically contradictory. When it’s in training mode, whatever it does may be reinforced.

And there is no reason to expect an LLM to be logical. LLMs are not programmed logical operators, “If X then Y.” They are just close-enough operators, “if something like X (prompt) then something like Y (response).” But because there seem to be some logical contradictions, the engineers imagine that this is deceptiveness.

#### Conclusions

I don’t know what more I can say except, _holy f-ing sh*t_! _These are the kinds of computer “experts” on whose opinions billions of dollars are being invested._ Is this the foundation for the whole “AI really can think” idea?

www.activistpost.com/can-ai-be-aligned-with-h...

#Tech #OffGuardian #VNAlexander

What Joe Biden’s cancer can (and should) teach us about The Media

Last night the news broke that former President Joe Biden has been diagnosed with stage 4 prostate cancer, which has already metastasized to his bones.

The conversation has gone in two predictable directions. On the one hand you have the predictable “outpouring of support” from fans of Team Blue, “liberal” journalists and celebrities. On the other hand you have cynical commentary from Team Red, questioning the timing of the announcement and wondering how someone with such a high profile and (presumably) first-class medical care could have had a cancer go undetected until such a late stage.

A third, quieter, option is to suggest a connection between this cancer and the Covid “vaccine”. (A possibility I reject out of hand, because I don’t believe there is any chance at all he was really given the experimental shot.)

But all of these conversations miss the point. The question is not _“what caused Biden’s cancer?”_ or _“why did they cover up Biden’s cancer?”_ It’s _“why are they telling us Biden has cancer?”_

Remember, the same media reporting _“Biden has cancer”_ spent months reporting _“Biden doesn’t have dementia”_ and _“Biden’s as sharp as ever”_, despite plain evidence to the contrary. They lied. Over and over again, for years. They quite literally told you to disregard the evidence of your eyes and ears. Until they stopped, and suddenly Joe Biden’s “mental decline” was no longer a conspiracy theory, but totally real and the reason to put Kamala Harris on the ballot. Joe Biden’s mental acuity did not change; all that changed was the requirement of the narrative.

Media reportage has no correlation with the truth. Not _negative_ correlation, _no_ correlation. They are unrelated.

If Joe Biden _had_ cancer, and it was narratively convenient that he did _not_, they would say he did not. If Joe Biden _didn’t_ have cancer, and it was narratively convenient that he _did_, they would say he did. If it becomes narratively convenient that Biden _no longer has cancer_, they will just say it went away – and that will have no bearing on whether or not it did go away, or ever existed in the first place.

If Joe Biden died tomorrow, and it was narratively convenient he was alive, they would pretend he was alive. And with current video and photo editing software it wouldn’t even be that hard.

The news cycle has a purpose that is not related to facts or truth – again, not _“opposed to”_ but _“not related”_ – and as such our conversations about “the news” must be had, almost entirely, on the meta level. _Why this? Why now?_

I really feel like I have said this a lot.

www.activistpost.com/what-joe-bidens-cancer-c...

#Health #KitKnightly #OffGuardian

ChatGPT Saves the World

Oh my, what, me worry? What do any of us have to worry about? The world is on the brink of destruction, we may be entering WW3 at any moment, the economy is collapsing with Evil Orange Man tariffs, the Evil KGB Man in Russia (aka Soviet Union) is about to invade Europe because he and Orange Man are ensconced in a supernatural bro-love relationship. Zelenskyy is the new hero of the world for taking on the Orange Man in debate, with his eye-rolls and bitch-shaming body language. We are doomed!

No! Call in ChatGPT to set it all straight for us! He/She/It knows the answers! She/It/He can answer any question we put to it! Just like God used to do for us, ChatGPT can now do! All knowing, all wisdom, all perfection! Problem solved!!

Although I am certain all this has faded into the blurry and diaphanous past by now, at the time, it was a common sight on social media to see people singing the praises of ChatGPT and its take on the Zelenskyy/Trump/Vance debacle, which is now ancient history. Excitable comments like, _“They ran the whole thing through ChatGPT, and it unequivocally said Trump was an incorrigible bully and Zelenskyy was a consummate hero!!”_ It was as if someone had conjured up Jesus himself to hear his take on the debate. How lovely.

First of all, I seriously doubt the entire interview was input into the great AI Oracle. And even if it was, it would have only been a transcription, devoid of any visual cues to include in the analysis—the body language between all parties would be quite important in achieving any sort of reliable result. The tone of voice would have an impact as well in determining some pretty key elements in the demeanour of the participants. And who in hell trusts computer AI to make assessments of human behaviour and character? I sure as hell would not. What a complete piece of crap idea.

But it is so typical, isn’t it? Considering these people who share the idea that Trump is an evil bully, what better way to verify their rather loony perception of Trump’s behaviour than with a piece of crap AI contraption that isn’t even given the proper or complete information to make an assessment?

Now, don’t get me wrong, I do believe that ChatGPT is brilliant (I have now discovered Grok 3, which I am even more impressed with), but I would never fall back on some system that could so easily be manipulated to give biased answers (just program it to hate Trump and there ya go). Also, how could anyone with a sound mind rely on a computer to tell them who is a bully and who isn’t? Anyone who would seems a bit out of touch with their own humanity.

This all just solidifies my belief that the whole AI chatbot thing is going to be the first bit of technology successful in destroying what it means to be a human. Most people probably think it is the physical robots that will end the world, the Terminator types with their physical strength and ability to walk through walls, jump over buildings, and kill anything they come into contact with. That will be the end. Maybe there is some truth in that, but superior physicality can be fought with bigger weapons, stronger defences, and maybe even wittier minds. A computer that competes with the way we put words together wields the mightiest weapon by far—the pen is mightier than the sword, as they say. It is our loss of mind, loss of wits, and loss of communication through the written word that will kill us quicker than a germ.

The chatbots that can research faster than light, come up with detailed facts quicker than they can type them out, and actually have a conversation with you are going to make the human mind obsolete over time—and probably a very short time at that. I think this will be quicker than losing our soul, which is also being chipped away at with art and music AI and a plethora of other technological advances. The loss of human wit and human intellectual ability through chatbots will, I think, be a slow death to our awareness—and probably a fairly painful one. It will go fast, though, in the long run.

Soon all manner of writing will be handed over to the bots. Not just nonfiction (that will be the first to go, of course) but fiction as well. All styles will be artificial. (I am sure one of the next things we will see are novels written “in the style of.”) Classic artists in the public domain will be stripped of all decency first—Dickens, Shakespeare, Hemingway—then the contemporary litterateurs such as Danielle Steel, Stephen King, John Grisham, and the like. Nothing will come out that can be described as “human style”—lost forever will be human opinion, human thought, and human literary creation. And the funny thing about this is no one will notice, and certainly no one will give a gnat’s butt hair of concern over it. Boom. Done.

If you were a nihilist (as I am quickly becoming), it would all be rather humorous watching it all go down. People complain that God is dead, that if there were a God He would never let this happen. The greatest gift God gave his human creation was free will, and the mind to determine what best course of action free will offered us. I don’t think God is a nihilist, so I don’t think He is laughing, but He is probably shaking his head, wondering why His creation is so stupid.

I had a dream once where rats took over the world. There were so many that there was nothing left but rats. In fact, the planet itself didn’t exist because the rats ate all of it in its entirety. There was just this enormous round ball of pure rats writhing around as they drifted through the cosmos. Obviously, they started eating each other, and finally, over a bit of time, there was nothing left but one rat. And it died of loneliness. Poor rat. God didn’t save it. God just figured He would start over again. “Maybe I won’t give humans free will this time,” He thought. That oughta do it.

https://www.activistpost.com/chatgpt-saves-the-world/

#Tech #OffGuardian #ToddHayen

Beyond the Law: What It Means to Weaponize the Government

President Trump’s declaration of war...

www.activistpost.com/beyond-the-law-what-it-m...

#Empire #OffGuardian

How Covid “vaccines” paved the way for mRNA Cancer “vaccines”

The unprecedentedly speedy development and approval of the various Covid “vaccines” – most using previously unsuccessful mRNA technology – is considered a scientific miracle by ardent followers of The Science™. Many others – us included – see it another way: one of the greatest scams ever perpetrated against a scared public, and a potentially incredibly dangerous and even deadly one.

But the damage done by that process doesn’t stop at the Covid “vaccines” themselves; they have opened the door for more and more “vaccines” to be rushed to market. That includes potentially “bespoke cancer vaccines”, of which there are currently hundreds of medical trials taking place around the world.

Earlier today _Wired_ published an interview with Lennard Lee, oncologist and director at the Ellison Institute of Technology in Oxford, headlined:

> Covid Vaccines Have Paved the Way for Cancer Vaccines

It’s quite an interesting read. For one thing, if I’m understanding Dr Lee’s words correctly, these products aren’t really “vaccines” [emphasis added]:

> In the current trials, we do a biopsy of the patient, sequence the tissue, send it to the pharmaceutical company, and they **design a personalized vaccine that’s bespoke to that patient’s cancer**.

They don’t prevent people from getting cancer; they are used to treat people _who already have cancer_. Meaning they’re not “vaccines” in the true sense of the word at all.

This echoes the Covid “vaccines”, which are known to prevent neither infection nor transmission of “Covid”, but only “limit severity” (the reason they can’t prevent transmission or infection is that “Covid” doesn’t really exist, but we’ve covered that enough). It seems the assault on words and their meanings that took place during Covid is going to have knock-on impacts for a long time yet. That, indeed, was the point.

Later, we learn just how fast all these cancer vaccines have been produced…

> Cancer vaccines weren’t a proper field of research before the pandemic. There was nothing. Apart from one exception, pretty much every clinical trial had failed. With the pandemic, however, we proved that mRNA vaccines were possible.

Cancer vaccines were not a “proper field of research” before the pandemic. It was the *ahem* “success” of the Covid “vaccine” which spurred the creation of mRNA cancer vaccines, so they have existed for – at absolute most – three years. And Dr Lee expects them to be approved in less than five [emphasis added]:

> …over the next six to 12 months, we will monitor the people in the trial and work out if there’s a difference between the people who took the cancer vaccine and the ones who didn’t. We’re hoping to have results by the end of the year or beginning of 2026. If it’s successful, we will have invented the first approved personalized mRNA vaccine, within only five years of the first licensed mRNA vaccine for Covid. **That’s pretty impressive.**

We would use the word “unbelievable”. It’s quite telling that he skips over the approval process and talks only of effectiveness rather than safety or side effects, don’t you think?

Of course, he has good reason to be confident; after all, the UK government has essentially guaranteed a market for these products before the trials are even completed. Dr Lee says as much himself:

> the UK government signed two partnerships: one with BioNTech to provide 10,000 patients with access to personalized cancer treatments by 2030, and a 10-year investment with Moderna in an innovation and technology center with capacity to produce up to 250 million vaccines.

At some points the interviewer actually asks some very pertinent questions.

> During the pandemic, the UK was opening clinical trials in a matter of a few weeks. But before it used to take years to complete a clinical trial. What changed?

That’s an excellent question, one the good doctor is either unequipped or unwilling to answer [emphasis added].

> It was really fascinating, because for many years, we believed that research is inherently slow. It used to take 20 years to get a drug to market. Most cancer patients, unfortunately, will succumb by the time a drug gets to market. We showed the world that it could be done in a year if you **modernize your process**, run **parts of the process in parallel**, and **use digital tools**.

See, they used to think research was “inherently slow”, but they were wrong. They just had to “modernize the process”. For some reason the interviewer doesn’t feel the need to call out this vague non-answer, so we never learn what “the process” is, or how it was “modernized”. “Running in parallel” and “digital tools” are similarly unexplained. The reader is left with no understanding of what that answer actually means in real terms. We are forced to guess.

“Running in parallel” obviously means doing things at the same time that used to be done one after the other, whether that means animal and human trials or some other parts of the “process” we can never know. Using “digital tools” probably means modelling studies and projections in place of data, but might mean something else. “Modernising the process” is so loose a term it defies even interpretive guesswork. The answer is vague to the point of being meaningless. It’s something no one would ever accept in a real conversation…

_“Wow Howard, you got your month’s work done in 15 minutes. How’d you manage that?”_

_“Oh, I just modernized the process.”_

_“…what the hell are you talking about Howard?”_

But vague non-answers going unchallenged is par for the course in propaganda interviews such as this.

Of course, none of those non-answers actually addresses the real issue of collapsing the timeframe. The reason drug trials take so long is the need for long-term safety data. The only way to have 5- or 10-year outcome data is to give someone the drug, then wait 5 or 10 years and see what happens. You can’t modernize that, “run it in parallel” or use “digital tools” to model it.

> In the UK, you set up the Cancer Vaccine Launch Pad at the end of 2022 to fast-track cancer vaccine trials. Why set up such an ambitious project right after the Covid pandemic?

This is another good question. The answer?

> The pandemic was ending, the Omicron variant was much milder than previous variants, and everyone had had their vaccines. Research in the area of Covid vaccines was starting to close down, but **companies like Moderna and BioNTech were trying to figure out what to do next, because there wasn’t going to be a need for a Covid vaccine market forever**

Translation: **Money**.

With the end of the “pandemic”, mRNA vaccine manufacturers realised the money hose was about to be turned off and they would need a new one. Hence the new mRNA vaccines for the flu, monkeypox, RSV, HIV, bird flu and cancer – all within a couple of years.

And cancer is the big one. You can’t overstate how _much_ money there is in cancer. Between screening and “treating”, it is an industry worth over $400 _billion per year_, and that’s only going up. (You could argue a lot of “screening” is about generating “cancer patients” to treat, but that’s another issue.)

Those treatments have been a gold mine for Big Pharma. They take forever, they’re expensive, and if they kill you (which they do, a lot), the coroner will probably find you died of cancer (or “complications resulting from cancer”). Indeed, the surgery/chemotherapy/radiation pipeline – “cut, poison, burn” – is so profitable that it makes me question how effective any cancer vaccine would ever be allowed to be, supposing it really existed. After all, a cancer “vaccine” that works by “training the immune system” isn’t just an alternative treatment to chemo and radiation; it is diametrically opposed to them, since the vaccine requires a healthy immune system and both those treatments destroy your immune system.

But we’re getting into speculation here – a different article for another time.

*

In conclusion, the Wired headline is entirely correct. Covid vaccines DID pave the way for mRNA cancer vaccines. How?

* They changed the general understanding of what the word “vaccine” even means.
* They affirmed blind faith in “the science”.
* They normalised rushed (or skipped) testing and trial periods.
* They normalised government approval without adequate safety testing.
* And they created a market for mRNA products that never previously existed.

…but I’m sure all of that should be considered a good thing.

www.activistpost.com/how-covid-vaccines-paved...

#Health #IndependentMediaAlliance #KitKnightly #OffGuardian
