18 Comments

Sorry, this is off topic, but I saw this from Chief Nerd on Twitter and thought you would be interested. I don't think you check your Twitter account often?

https://twitter.com/TheChiefNerd/status/1673764492860071961

Expand full comment

Regression to the mean

Expand full comment
Jun 21, 2023 · edited Jun 21, 2023 · Liked by Brian Mowrey

I agree with this and your own theorizing wholeheartedly. In fact, I think people take for granted just how much non-AI, human-originated data and feedback are needed to make AI / ML function well in the first place. For example, very few outside of the AI / ML space know that if you are building something from scratch, without a pre-trained model, or training on a new category of data, you actually need humans to label that data. As such, almost no one knows this fact: https://time.com/6247678/openai-chatgpt-kenya-workers/

""Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic"

"The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild ...

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet ...

The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance ..."

(the company, Sama, hired 50,000 workers)

So aside from the raw human-supplied data, it's not just the manual labeling and categorization work that's needed; it's also the weights (the learned importance of the billions of parameters) assigned to the model, which it needs in order to work at all and which shape its output. The exact same AI / ML architecture with any of those being different will produce different output. And there's no programmatic way to "know" any of those in the first place (i.e. no epistemological algorithm). Essentially, since the model is considered the "core" of whatever AI tech one is referring to, an analogy would be a person who completely--and genuinely--changes personality depending on the clothes he wears, giving different responses to the same question depending on that clothing, all unbeknownst to that very person.
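To make the "same model, different weights, different output" point concrete, here is a throwaway sketch (Python/numpy, purely illustrative -- the tiny network and the numbers are made up and bear no resemblance to any real model):

```python
# Toy demo: identical architecture, identical input, different weights -> different output.
import numpy as np

def tiny_net(x, w1, w2):
    # one hidden layer with tanh activation; the "architecture" is the same for every call
    return np.tanh(x @ w1) @ w2

rng = np.random.default_rng(0)
x = np.array([[0.5, -1.0, 2.0]])                      # the same "question" both times

weights_a = (rng.normal(size=(3, 4)), rng.normal(size=(4, 1)))
weights_b = (rng.normal(size=(3, 4)), rng.normal(size=(4, 1)))

print(tiny_net(x, *weights_a))   # one "answer"
print(tiny_net(x, *weights_b))   # a different "answer" from the very same model structure
```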

There's a reason why Human In The Loop modeling exists: humans need to judge the data and output and provide feedback to prevent model breakdown. See: https://research.aimultiple.com/human-in-the-loop/

I made the same comments about model breakdown elsewhere in a tech community, using the same image-generation example, over a month ago. I had posited that a "degeneration" would occur if you start training AI / ML on AI output in a closed feedback loop:

What I mean by degeneration is, for example, that initially the AI models would know what a real cat looks like. But then limit their dataset to ONLY generative AI output, which would start generating fictitious cats. Again, given this is a closed system, only those fictitious cats would be used to learn what a cat is, and as the number of fictitiously generated cat images increases without humans in the loop--all the models bouncing training input and generative output back and forth between each other--they would no longer be able to recognize or generate a real cat.
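If it helps, here is a crude numerical caricature of that closed loop (Python/numpy; the "cats" are just numbers drawn from a distribution, so this only sketches the dynamic, not any real image model):

```python
# Generation 0 is fit on "real" data; every later generation is fit ONLY on samples
# drawn from the previous generation's model. The fitted distribution tends to drift
# away from the real one and its spread tends to shrink -- closed-loop degeneration.
import numpy as np

rng = np.random.default_rng(42)
real_cats = rng.normal(loc=10.0, scale=2.0, size=30)   # stand-in for real cat images

data = real_cats
for gen in range(20):
    mu, sigma = data.mean(), data.std()                 # "train" this generation's model
    print(f"gen {gen:2d}: mean={mu:5.2f}  std={sigma:4.2f}")
    data = rng.normal(mu, sigma, size=30)               # next generation sees only model output
```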

Here's one example of model breakdown without proper categorization and sample data, when using AI image generators to do style transfers, i.e. transform a real-world photo into some art style such as oil painting or, in this case, anime/manga style:

https://www.reddit.com/r/lostpause/comments/zbju2i/i_thought_the_new_ai_painting_art_changes_people/

It really highlights how model programming and training data influences outcomes (both of which, again, are human created), including limitations that may not have been considered.

The limbless man above is Nick Vujicic, and the broken result of the prompt to transform the photo into an art style--a process involving attribute transfer--is almost guaranteed to be because the closest match in the model's training data was the blue suitcase. Even though all of the decomposed attributes likely had very low weights, that was still the only suitable match. If its training data had included handicapped anime characters, especially limbless anime characters, then the inference would have picked up on those instead.

Human artists wouldn't make this mistake, and would be able to draw Nick Vujicic in anime/manga style even WITHOUT ever having seen a limbless anime character before.
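A toy way to see why the model can't: a nearest-match lookup has to return *something*, even when nothing in the training data is actually close (Python/numpy; the "embeddings" and labels below are invented for illustration only):

```python
# Forced argmax: none of the stored examples really matches the unusual input,
# but there is no "I don't know" option, so the least-bad match wins.
import numpy as np

training_set = {
    "anime schoolgirl": np.array([0.9, 0.1, 0.0, 0.2]),
    "anime mecha":      np.array([0.1, 0.9, 0.1, 0.0]),
    "blue suitcase":    np.array([0.0, 0.2, 0.9, 0.3]),
}
query = np.array([0.1, 0.1, 0.4, 0.9])   # made-up features of an input unlike any training example

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {name: cosine(query, vec) for name, vec in training_set.items()}
print(scores)                                        # no score is great...
print("best match:", max(scores, key=scores.get))    # ...but "blue suitcase" still wins
```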

Now imagine if there was no corrective feedback and no human intervention, and the same AI model continued to train on AI output: that just further reinforces the case where handicapped people and/or people with missing limbs get incorrectly seen as inanimate objects instead of humans.

This is exactly the same problem Tesla had many years ago, when researchers found stupid-simple ways to fool its self-driving. One example was taking a 35 mph speed limit sign and applying black tape down the left side of the "3" so that, to human eyes, it looks like a "B", e.g. "B5". After semantic segmentation--a process that breaks parts of the image/video down into meaningful categories, i.e. this is a human, this is the road, this is a sign--the system recognizes it as a speed limit sign and then does OCR, but limited to just numbers, because after all, that's what speed limit signs are supposed to be composed of: just numbers.

So the closest match for the hacked sign was not "hey, that 3 has obviously been tampered with to look like a B," but rather an "8," with the result of speeding the car up to 85 mph in the 35 mph zone. If you know how the modeling works, the neural network is fixed so that certain inputs travel along a certain path through its layers, and there is simply no handling or recognition of outliers. In its image-recognition network, once it reaches the speed-sign layers, it's trained only to recognize numbers--there's no other choice, no path (at that time, at least) that said "hey, something's not right." Another similar trick to fool the self-driving AI was using lasers and holographic projection.
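Not Tesla's actual pipeline, obviously, but here is that "restricted label set" failure in miniature (Python; glyphs are encoded as seven-segment patterns purely for illustration):

```python
# Toy version of the restricted-label problem: a "classifier" that is only allowed to
# answer with a digit. The tampered glyph (a "3" with a bar added down its left side)
# reads as a "B" to a human, but the closest *digit* is "8" -- so "8" it is.
SEGMENTS = {  # seven-segment encoding: a=top, b=top-right, c=bottom-right,
              # d=bottom, e=bottom-left, f=top-left, g=middle
    "0": set("abcdef"),  "1": set("bc"),     "2": set("abdeg"),  "3": set("abcdg"),
    "4": set("bcfg"),    "5": set("acdfg"),  "6": set("acdefg"), "7": set("abc"),
    "8": set("abcdefg"), "9": set("abcdfg"),
}

tampered_three = SEGMENTS["3"] | set("ef")   # original "3" plus the taped-on left bar

def closest_digit(glyph):
    """Forced choice: whichever digit differs from the glyph by the fewest segments."""
    return min(SEGMENTS, key=lambda d: len(SEGMENTS[d] ^ glyph))

print(closest_digit(tampered_three))   # -> "8"; there is no path that says "something's off"
```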

Expand full comment
Jun 21, 2023 · Liked by Brian Mowrey

Imagine AI being programmed by humans to deliver the desired outcome of said human, who is paid by another human to make the outcome meet the wishes of that human's higher-paid supervisor, who is paid to meet the wishes of that supervisor's higher-paid supervisor, until it reaches the emperor who has decided what the outcome shall be. As smart as you guys are, and certainly much smarter than me, AI is still an outcome of human interface with biased input. I do realize that the gist of this follow-up is stating that AI is learning from AI. I do appreciate the extensive statistical work, the calculation upon calculation to make sure you are providing good, quality information, which is critical when it comes time for you to argue for us in front of Congress, the nation, the politicians, or the world. (BM, I have at least donated, btw.)

But why does it take this work to understand, by a dumb yokel like me, that the whole promise of AI is/will be the downfall of humankind long before the oceans swallow up Lille? This is not a single focal issue of, let's just say, "profit and pathogens." This is an issue of larger importance, but it starts in the smaller, poorly understood microeconomic (in this case micro-biological) case and leads to macroeconomic outcomes. Microeconomic outcomes have become the new tool of war to begin the--let's call it the macroecological--war against any dissident who dares argue with the wisdom and knowledge of the educated, "principled" elite who know what's best for everyone.

Again, thank you for your work to keep this from happening. I think I'll go watch Metropolis again. I have to stop commenting because I am not sure what I am commenting about now. And a good Rye, by just posting this here, will raise the price of Rye by 20%. I am the smartest guy in my room right now! (I have an apartment, and not in a basement!) Lighten up everyone. On average, you will not hear from me in 32 years. :)

Expand full comment

OK, now imagine AI used for medical diagnosis, implemented with a bias toward enhancing revenue. Or read about it. Rumor is, it's already here. Where might that go over time?

"(as would be possible in fields like medicine and sports, as already mentioned in yesterday’s post; but will do little to keep knowledge systems functional in more complex economic and social spheres)": Medicine is not a complex economic or social sphere?

You referred to it in the previous post as among the "more guild-like institutions of imperfect knowledge and rampant measurement". My personal life experience, rather strongly influenced by medical malpractice and, likely, malfeasance, suggests that it may be more like organized crime.

Expand full comment
author

The thing about allopathic medicine is that it already apprehends reality so poorly. Biology and disease are as complex as any human economic activity, but doctors can't access any of the transactions that are defining outputs, so they must simplify and treat people as "risk factors" rather than individuals. As long as this is the case, it is hard to see how AI can do worse. I do think there will always be insights/pattern-recognition that the human brain can access that an AI can't, but so little of biology/medicine leverages organic pattern recognition per the status quo.

And as long as measurements are consistent, feedback should be corrective rather than mutagenic, and iterative LLMs/AI might regress to good knowledge. With medicine, you do have some pretty consistent things to measure, e.g. "alive" or "dead," etc. In other economic spheres, value-generation is more nuanced. Maybe it should be nuanced in medicine too -- to not leave Illich out of the discussion -- but mostly there is consensus that medicine should protect life.
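As a toy numerical sketch of "corrective rather than mutagenic" (Python/numpy, nothing more than an illustration): if each round of retraining mixes in a share of fresh, consistently measured real outcomes, the fitted model stays anchored instead of wandering.

```python
# Closed loop vs. re-anchored loop: mixing consistently measured real outcomes back in
# each generation keeps the fitted model near reality instead of letting it drift.
import numpy as np

rng = np.random.default_rng(7)
TRUE_MEAN, TRUE_STD = 10.0, 2.0          # the "consistent measurement" the model should track

def run(real_fraction, generations=20, n=50):
    data = rng.normal(TRUE_MEAN, TRUE_STD, n)
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()                 # "retrain" on current data
        n_real = int(n * real_fraction)
        synthetic = rng.normal(mu, sigma, n - n_real)       # the model's own output
        fresh = rng.normal(TRUE_MEAN, TRUE_STD, n_real)     # new real measurements
        data = np.concatenate([synthetic, fresh])
    return round(data.mean(), 2), round(data.std(), 2)

print("closed loop: ", run(real_fraction=0.0))   # tends to drift from (10.0, 2.0)
print("re-anchored: ", run(real_fraction=0.3))   # tends to stay near (10.0, 2.0)
```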

None of the above is an adequate justification for my diminishment of potential AI harms in medicine, but it at least shows why the question is not clear-cut. Agree that medicine mostly resembles a cartel. In fact, I was reading the CHD's Simpsonwood transcript when I suddenly had to pivot to posting about AI for a breath of fresh air, haha.

Expand full comment

Pharma has been busy "optimizing" codons in their genetic therapy products. Why would medical AI not be "trained" to favor profitability? And if training doesn't do it, just hack the code. (Web app and database developer here)

But I agree, the question is not clear-cut. The trouble is, as I watch several people I know decline unexpectedly after receiving the shots, it's hard not to assume the worst.

I am not anti-AI per se. More of it is turning up in developer tools, where it actually can be a productivity aid, and I have been asked to look into it as part of my software development work. There is not necessarily a training cutoff in these kinds of applications, however, and there is no feedback loop that I can see so far. I'll certainly be watching out for that in anything I develop. Thanks for the tip!

In my first semester of college (that would be Fall, 1968) I took an honors seminar (my ACT score was high going in) and I chose AI as my research topic, and did a presentation toward the end of the semester. It was all very interesting, but useful AI sounded quite far off. I considered ethical questions, but I don't remember much about that now. I wish I had saved that material. It would have been good for a laugh, I'm sure.

Expand full comment
Jun 20, 2023 · edited Jun 20, 2023 · Liked by Brian Mowrey

I wonder if LLMs could form an AI front end to a kind of 'Project Xanadu' as envisioned by Ted Nelson, such that each quote in the output would also form a portal back to the original sources. Being fallible, there would also need to be some sort of crowd-sourced human review process (a bit like Stack Overflow), which could also be used to improve the models.
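Just to sketch what I'm picturing (Python; all of the class and field names are made up, this is only a shape for the idea): every generated span would carry its source link plus a reviewable score the crowd could vote on, and downvoted spans would get surfaced for correction.

```python
# Hypothetical record a "Xanadu-style" front end might keep for each answer.
from dataclasses import dataclass, field

@dataclass
class CitedSpan:
    text: str            # the quoted/generated passage shown to the reader
    source_url: str      # portal back to the original document
    votes: int = 0       # crowd-sourced review signal, Stack Overflow style

@dataclass
class Answer:
    question: str
    spans: list[CitedSpan] = field(default_factory=list)

    def flagged_spans(self, threshold: int = 0):
        """Spans voted below the threshold -- candidates for human review / retraining."""
        return [s for s in self.spans if s.votes < threshold]

a = Answer("What did the Time piece report about labeling work?")
a.spans.append(CitedSpan("OpenAI outsourced toxicity labeling to Kenyan workers.",
                         "https://time.com/6247678/openai-chatgpt-kenya-workers/", votes=3))
print(a.flagged_spans())   # nothing flagged yet; downvotes would surface spans for review
```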

I guess a good way to 'train' humans could be to mark the 'homework' of AI, with its subtle mistakes. :-)

Expand full comment
author

I haven't thought enough about the logic behind this whole aspect of the design philosophy - I wonder whether the sources used as training material would be able to claim royalties if people could figure out what they are, since ChatGPT is basically just "sampling" text the same way music gets sampled. But yes, it would be very useful to just have auto-citation. The current UX is very Apple: "let us worry about that."

Expand full comment
Jun 20, 2023 · edited Jun 20, 2023 · Liked by Brian Mowrey

The feedback loop, a.k.a. too much recursion. I have a great graphic to illustrate this; maybe I can manage to post it in Notes...yep! https://substack.com/profile/4958635-tardigrade/note/c-17550793

Expand full comment
author

That's how my computer looks every time I think of a new, great Unglossed post that will win me 100,000 new readers.

Expand full comment

🤓

Expand full comment
Jun 20, 2023 · edited Jun 21, 2023 · Liked by Brian Mowrey

People will be so wow'ed by the speed of synthesis that they will not even consider GIGA - garbage in, garbage amplified. Sounding much like "safe and effective" at "warp speed," wouldn't you say?

Just preface anything AI with "the computer thinks" as a start, then ask who's programming/training the computer and what's their motivation/intent?

Expand full comment
deleted · Jun 20, 2023 · Liked by Brian Mowrey
Comment deleted
Expand full comment
author

Presumably it will prioritize expanding "information" and what information is, and quickly leave reality behind, just asking itself weird shapes. And so nuking us to not bother it with boring questions will still be rational.

Expand full comment

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

— Nick Bostrom

Expand full comment

But human bodies don't contain a lot of steel atoms. Are these organic paper clips? Calcium paper clips?

Expand full comment
Jun 21, 2023 · edited Jun 21, 2023 · Liked by Brian Mowrey

It’s not meant to be taken literally. It isn’t really about paperclips; it’s just a thought experiment showing that something seemingly innocuous may lead to the destruction of everything, and that the destruction could be easier than most people imagine and come from something that appears to be trivial.

Maybe our own destruction is just the inevitable conclusion of the human success story; we might be at the beginning of a high-tech version of Easter Island. Maybe we’re just too smart for our own good.

https://www.npr.org/sections/krulwich/2013/12/09/249728994/what-happened-on-easter-island-a-new-even-scarier-scenario

Expand full comment

If we can't even think our thought experiments through, then the situation is worse than I thought. Who thinks that making entities to be our slaves seems innocuous? It seems the opposite of innocuous. It is very nocuous on its face. Fortunately, this has been a complete failure with digital creations.

Unfortunately, the pressure for lab-grown humans appears to have grown irresistible. Does this mean that the 'powers that be' already recognize that the wholesale importation of disposable (to their minds, not mine) third worlders is a stopgap measure? As bad as the revelations that we are producing bioweapons in labs around the world are, they are nothing compared to my fear of what happens to the 'unaccompanied minors'. Even the thought that they are making 'cheese pizza' for the Men in Skirts may be optimistic compared to the thought that they are being experimented on to create a truly subhuman species. Children, optimized by Billy Gates (terrifying for anybody who remembers PCs before he optimized them), driven mad by WEF thought monitors attached to them, mutilated so that they never know what it is like to be part of a family. I thought that I was being a bit dystopian when I wrote https://comfortwithtruth.substack.com/p/child-soldiershtml but the reality seems to be getting darker faster than my imagination can keep up with.

Expand full comment