An AI Ethical Framework
The six layers (plus an optional seventh) of a “generative AI” cake which meets our minimum threshold of moral and technical acceptability.
As outlined by Whitefusion founder Jared White in Episode 111 of the Fresh Fusion Podcast:
- Generative AI tools must be 100% open. The claim that they must be kept closed for safety/security reasons is bullshit. If these tools are unsafe (and in many cases they are), they must be legally regulated, just like hard drugs, weapons, tobacco, child pornography, etc. Due to this requirement of openness, I basically reject all corporate operation (and thus selling) of generative AI. Just as I don’t pay any corporation for the Ruby programming language, or the ability to edit JPEG images, or any number of other vital programming and data manipulation tasks, I don’t understand why I would pay a particular corporation for access to generative AI (or, even if it’s somehow free, why I would rely on that single corporate source for the technology).
- Generative AI tools must be completely transparent about the sources they use and why. Black-box algorithms are unacceptable. Anyone claiming we don’t yet know how to make algorithms that aren’t black boxes is simply revealing that these algorithms can’t yet be used ethically. I’m aware there’s ongoing research into ways to backtrack from monolithic outputs to the variety of inputs involved, but it’s clear we’ll need a whole lot more of that baked in.
- The sources generative AI tools use must be 100% opt-in. There can’t be any of this “well, you should opt out after the fact if you’re really worried about it”. 🤨 All training datasets need to be 100% vetted, with all parties involved giving their consent and receiving reasonable compensation if indeed they wish to be compensated.
- Generative AI tools should be “narrowly” purposeful. In other words, these general-purpose, all-knowing, all-seeing, magical prompt machines which can generate virtually any output you could imagine are thoroughly unacceptable. Tools which can provide endless “novel” output are ultimately useless tools. This isn’t anything like the reasoning capabilities of humans, or even the verified automation enabled by general-purpose computing. When it comes to AI algorithms, we need extremely targeted solutions if we are to trust anything coming out of them.
- Generative AI output should be tagged as AI-generated output, and it should be easy to trace how that output gets used throughout content pipelines (see the sketch after this list for one way such tagging could work). The idea that you can just take giant reams of text, or still imagery, or video, and pass that off as human-made, or compressively integrate it into something eventually human-made, without any disclosure or possibility of verification is thoroughly unacceptable. AI output being promoted online without proper disclosure is DESTROYING the fabric of the Open Web. I’m constantly second-guessing whether the art I’m looking at is actually real, and I’ve been burned more than once (thinking I’m following an artist, only to find out they’re just churning out regurgitated AI imagery). Blog posts featuring AI-generated imagery are simply awful… I almost always leave the article behind and even unfollow people who do this habitually. Don’t do that! 😅
- Generative AI tools should be opt-in for users as well. I reject all software which adds generative AI to its feature set without the ability for me to opt out, much less opt in in the first place. Forcing me to have enabled access to these tools is deeply offensive. It’s even worse when a job requires me to use these tools as part of the job description. That would be as bonkers to me as saying you can only work at this job if you smoke, or drink alcohol, or carry a gun. That last one might make sense if, say, you’re a police officer or in the military or maybe in private security, but otherwise it’s thoroughly unacceptable.
- I consider this seventh layer optional because there are ways to argue the point one direction or another, but generative AI tools currently seem to take enormous resources in terms of electricity usage, semiconductor production requirements, etc. There’s a very real environmental cost here. I’m not actually sure how it compares to cryptocurrencies, which we already know are horrendously bad for the environment; this may be a bit less egregious, but it’s certainly not ideal. Perhaps over time this issue will resolve itself to a degree as silicon technology improves, but we’re not there yet.
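To make the fifth layer a bit more concrete, here’s a minimal sketch of what “tagged and traceable throughout content pipelines” could look like. Everything here is hypothetical: the sidecar-manifest approach, the schema, and the `write_provenance_manifest` / `is_ai_generated` helpers are invented for illustration, not an existing standard (real provenance efforts such as C2PA content credentials define their own formats).

```python
# Hypothetical sketch: every AI-generated asset ships with a JSON sidecar
# manifest that downstream pipeline stages can inspect before publishing.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_manifest(asset_path: str, model_name: str, training_sources: list[str]) -> Path:
    """Write a sidecar file declaring an asset as AI-generated.

    The schema below is invented for this example; `training_sources`
    nods at the third layer (opt-in, disclosed training data).
    """
    asset = Path(asset_path)
    manifest = {
        "asset": asset.name,
        # A content hash lets later stages verify the file hasn't been swapped.
        "sha256": hashlib.sha256(asset.read_bytes()).hexdigest(),
        "ai_generated": True,
        "model": model_name,
        "training_sources": training_sources,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = Path(asset_path + ".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

def is_ai_generated(asset_path: str) -> bool:
    """Downstream check: a publishing step could refuse to present
    an asset as human-made whenever this returns True."""
    sidecar = Path(asset_path + ".provenance.json")
    if not sidecar.exists():
        return False
    return bool(json.loads(sidecar.read_text()).get("ai_generated"))
```

The point isn’t this particular format; it’s that disclosure should be machine-readable and travel with the asset, so every step of a content pipeline can verify the AI-generated label rather than quietly stripping it.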
Listen to the full Fresh Fusion episode for a deep dive into this ethical approach to using generative AI tools and why virtually all tools in widespread use today fall wildly short in the morality department—which makes the embrace of these tools by corporations incredibly disappointing.
Can we thread this needle?
Is there a viable path forward?