Up front, I’m not an expert on AI, ethics, privacy or security. You shouldn’t base any decision on these topics (or maybe any topics) on my opinions. This post is not about calling anyone out or telling people they shouldn’t be using AI. I’ve felt some internal conflicts and needed to get my thoughts down.

Ethics (Kind Of) Link to heading

As an intro, I loved this post from James Thomson:

Post by @[email protected]
View on Mastodon

Writers: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

Artists: Generative AI models were built on our stolen works, are deeply unethical, and risk devaluing our entire profession.

Developers: Wheeeeeeeeee!

I personally know artists who can’t stand AI and are rightly offended when someone drops an AI-generated image into a group chat. To have spent a lifetime honing a craft just to see a computer regurgitate a mash-up of every piece of art it was ever trained on must be insulting. But so many developers have no issue with this and are embracing it wholeheartedly.1

I’m finding this part difficult. I’ve always felt that sitting down to write code is an artistic endeavour. It takes real imagination to be presented with a problem, sift through a near-infinite number of ways of solving it, and create the widget that does. There’s true creativity in finding a solution no one else has.

It’s interesting that some people draw a very strong distinction here: they won’t use generative AI for text or images, but have no issue using it to generate code. There are obviously differences in licensing, in that a lot of the code that’s public on the internet is open source and has fairly unrestrictive licenses attached, but not all of it is. And even if it is, should companies be able to take it and then sell it back to us for a monthly fee?

This is somewhat over the top, but are we not similar to composers who know when to deploy the right note, for the right duration, on the right instrument to evoke the right emotion? Ok, the metaphor is stretched, and while I aim to surprise and delight when delivering software, I’ll admit it probably evokes less emotion than Stairway To Heaven or While My Guitar Gently Weeps.

But (if you’ll permit me a more tortured metaphor), maybe we are to graduate to conductors. We’re no longer putting the notes on the staff, but guiding the interpretation of the score and shaping the way others perform. It’s not our job to know how to blow through a reed or why different instruments are in different keys; we just need to ensure that the final performance shines.

Ok, I’ve done that to death and taken it to strange new heights, or depths, depending on your perspective.

The obvious question then becomes: who does know how to blow through a reed (sorry)? If the AI is just regurgitating what it’s been trained on, then where will the new training material come from? If Stack Overflow is dead, whose answers will all the greedy AI companies steal?

Analog to Digital Link to heading

I’ve always found the current interaction with LLMs strange, particularly when we use them for automation or software development. We take what’s comparably an analog input (our thoughts) and pass it to a computer (digital) in the form of a chatbot, which then parses it, does a whole bunch of maths to guess a likely answer, and gives us that answer in the form of some programming language, which we, or it, then give back to the computer to run. It’s a messy, error-prone and lossy path for the signal to travel from us to the computer.

Why can’t I just ask the computer to do a thing without it first having to pretend to be human and write something else to tell itself to do that thing?

It scares me that a computer has to write Python code to tell itself what to do. Especially when it doesn’t know whether the OS it is developing for even exists.

Xcode proclaims iOS 26 is not a real version

Obviously that’s because we’re at the beginning. The current way a computer is told to do something is either through existing software or through code. And right now writing code is the easiest way for an LLM to tell a computer what to do, but I think that will quickly change.

Post by @[email protected]
View on Mastodon

Take this example from Finn Voorhees, who asked Codex to add a feature to Xcode. How long before these agents are making the changes themselves and changing the fundamental OS beneath us?

Back to Ethics Link to heading

This section was supposed to be about ethics but I’ve gotten off track. There are other topics that concern me, including the environmental impacts, the morality of hoovering up all the accessible (and sometimes not-so-accessible) data a company can get its hands on and then selling it back to those same people, and just the sheer amount of capital and hardware being thrown at AI. How can we be out of stock of hard drives for the year?

And back to my opening point, there’s the human impact. There’s obviously a lot of press about tech companies laying off massive numbers of staff (between 25,000 and 40,000 so far this year). Obviously these aren’t all related to AI, but there’s no doubt that large numbers of people are losing their jobs so that companies can invest more into AI. More compute. More RAM. More SSDs. More dangerous code.

And outside of tech, there are people who make their living generating prose, marketing, illustrations, icons, and any number of other artistic pursuits that AI can now generate cheaply, quickly and well enough for a lot of people. I applaud Macstories.net for hiring real artists to illustrate their iOS 18 review, but many others will quickly type in a prompt, get something good enough, and consider it done.

I can’t find the meme now, but I wasn’t put on this earth not to create.

Security and Privacy Link to heading

This is the bit that scares me the most about AI usage. There are a lot of excited people who see huge benefit in AI but who, in my opinion, are not fully understanding and assessing the risks.

Take OpenClaw (fka Moltbot (fka Clawdbot)) as an example. It seems like everyone was on board with this tool. I read a few articles and while the capabilities sounded interesting, the security implications were insane. It seemed like the whole world had gone mad. And they had. There are countless stories of insecure skills that leak your API keys and personally identifiable information (PII), and of malicious plugins that install malware outright.

Not to mention the code itself, which is apparently 300,000+ lines of unaudited code, with over 1,700 open issues on GitHub marked as bugs, 3,000+ pull requests, and over 12,000 commits. There’s no way to audit that even if someone wanted to. How could you ever trust that running on your machine?

“We are essentially installing a root-level shell that we control via chat messages” Source: Reddit

And yet people did. Lots of them. And a whole bunch of people even went so far as to buy a Mac mini to dedicate to running OpenClaw; so many, in fact, that if you believe the hype it created Mac mini shortages. And it was enough people that the creator, Peter Steinberger, is joining OpenAI to continue his work on agents.

Function over Security Link to heading

It’s well known that you need to weigh risk against functionality. Too far one way and you get nothing done; too far the other and the risks eventuate. It seems that the benefits people were getting from OpenClaw were enough for them not to care about the security implications.

Which is bonkers to me!

I’m not saying everyone should be writing up a risk register (although it’s a fun thing to do), but opening up your computer, and in many cases your entire digital life, to an agent that you don’t really understand seems crazy. Yes, the agent runs locally, but it’s sending all your requests back to your AI provider of choice. Passwords, the contents of messages, anything you give it access to could potentially be passed on to the AI provider.

But people have been doing this for a long time, well before AI agents were a thing. There are hundreds of tools that can be hooked up to your Gmail account and read through the content of all your emails. So perhaps I’m just out here yelling at (AI) clouds.

Old man yells at clouds

Ok then, Privacy and Security Link to heading

If I can’t convince people that they should care about security, what about privacy?

My usage of AI for coding so far has been limited to small self-contained projects. Something where I’m not giving too much away and the stakes are pretty low. If it breaks, only I’m impacted; no production databases are going to be wiped. I haven’t unleashed AI on any large scale projects where there is existing intellectual property.

Corporate Privacy Link to heading

Most companies don’t want to share their intellectual property outside of the company. That’s why there’s a massive market for data loss prevention (DLP) tools, and why every person and their dog applies a signature to their email saying “please don’t read this if you’re not the intended recipient”.

I’m not sure how people are using this professionally in any kind of corporate environment or on existing proprietary codebases. I’d love to hear more opinions on this, because maybe I’m missing something.

The Trusted Insider (Potential Hyperbole) Link to heading

You wouldn’t pair program with someone outside your company. So why would you do it with an uncontrolled external AI? ChatGPT may not be your competitor today, but what about next week?

Sure, these companies have privacy policies, service agreements and guardrails in place. Take this quote from the OpenAI Services Agreement:

[OpenAI] may disclose Confidential Information only to its Affiliates, employees, contractors, and agents who have a need to know and who are bound by confidentiality obligations at least as restrictive as those in this Agreement.

This is pretty standard language for any provider, and I’m not trying to fear monger, but that’s potentially a lot of people who could access your and your company’s data. And while I’m sure most of OpenAI’s Affiliates, employees, contractors and agents are good people, some of them won’t be. The data AI companies hold is hugely valuable, and there are always bad actors who will take advantage of that.

But that’s true of all cloud providers, so maybe I’m being over the top.

Personal Privacy Link to heading

I won’t talk much about this, only to say that allowing these tools unfettered access to your personal information is probably a bad idea. You should be limiting, as much as possible, the information you share with them. Please don’t upload your medical records to ask an AI for a second opinion. Go to a medical professional.

The information AI companies are being fed is the same information advertisers have been clamouring for since the dawn of the popup ad.2 Advertising companies had to rely on shady practices to track you across websites and build up a profile over time so they could continue to market humidifiers to you after you’d purchased one. But now people are freely giving AI companies all of this and more. And it’s all tied directly back to a user, given that you need an account to access these tools.

I think it’s also useful to point out that a lot of this data is in your own voice. People prompt an AI. They converse with it. They allow it access to their rhythm and style of communication. That’s quite different from our previous terse interactions with search engines. Advertisers could learn about your purchasing habits, or who you hung out with, but they didn’t know your voice. How long before that’s exploited in some way, either to sell you something or to convince you of a particular point of view?

Personal information is precious. Please find ways to limit giving it away to anyone, including AI companies. And please don’t use AI to generate passwords.
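If an example helps, here’s a minimal sketch of what I mean, in Python and purely as an illustration (the function name and character set are my own choices): a strong password generated locally with the standard library’s secrets module, where nothing ever leaves your machine and no account is involved.

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Generate a password locally using a cryptographically secure RNG."""
    # Letters, digits and a small set of punctuation characters.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))


if __name__ == "__main__":
    # Prints one 20-character password; adjust the length to taste.
    print(generate_password())
```

A proper password manager is still the better answer, but the point stands: this is a job for a local random number generator, not a remote language model.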

I’m Probably Wrong. But Probably Not. Link to heading

I’m sure plenty of people think I’m wrong. There are guardrails in place. These companies promise not to use your content for training if you pay them enough or turn those features off. My information isn’t sensitive anyway. Why would they care about my data?3

And almost like clockwork, Microsoft swoops in to make my argument for me and accidentally allows Copilot to read information that was specifically protected by DLP policies. And how does Microsoft choose to respond to this?

…this behaviour did not meet our intended Copilot experience.

Excuse me? Didn’t meet your “intended Copilot experience”? What does that even mean?!

I can’t think of three more insane words to explain why Microsoft just read emails that were protected using Microsoft’s own DLP features to ensure that sensitive company data wasn’t shared with a third party, like Microsoft. Microsoft are supposed to be the ones who get this right. They’re the big-E Enterprise partner you can depend on to provide a solid and secure base. And yet here we are.

A Gartner analyst is quoted in that BBC article as saying “this sort of fumble is unavoidable”. They’re right. It’s a feature, not a bug. Things are moving too rapidly for security to catch up. Companies are pushing their AI updates out the door as soon as possible to make sure they keep up with the latest innovation released by their competitors. And there’s no oversight. Is anything going to happen to Microsoft for this screw up? Is there any accountability? When do these incidents become the “intended Copilot experience”?4

So you’re an AI hater, right? Link to heading

No, I’m not a hater. I just have some pretty mixed feelings about it all that I wanted to write out.

The technology is undeniably incredible. If I could run these models locally I would have no hesitation in using them. But running offline is in no way compelling for AI companies, and it’s not really a reality right now unless you have a bunch of money and electricity to burn (if you can find any hardware left to buy). And even if you do have the capability to run a large model locally, the models you’re going to get can’t compete with what the commercial companies are offering. Plenty of the open source models are really good, but they’re not great.

So I’m an avid AI user who is also a skeptic. And I’m trying to reconcile those two views. Or it will just get to a point where I have no choice, and I’m forced to “mourn the passing of our craft”. Or just focus on InfoSec.


  1. I need to point out that not all developers are on board with letting AI take the reins. There are, of course, plenty of people who aren’t keen, who are detractors, and who are vehemently against AI. However, it does feel like we’ve hit a tipping point in the last few months, and I’m not the only person who thinks so. ↩︎

  2. It’s funny that advertisers ruined the web experience and then wonder why people prefer to ask an AI to go off and do the browsing for them. I’m sure I’m the only one to make this observation. ↩︎

  3. These are made-up arguments for made-up commentators. ↩︎

  4. Sorry, I can’t let this go. How did “intended Copilot experience” make it through presumably numerous internal reviews and passes by marketing, lawyers, and whoever else? How was this the best they could do? Or did Copilot write the apology itself? ↩︎