What I've Learned Using AI
The use of AI across all fields has given us the ability to excel at our tasks very quickly. For example, this very article that you’re reading right now was dictated by my voice into Microsoft Word. The dictation, which turned out to be two pages in length, was then uploaded to Claude (AI). I told Claude to break it down into paragraphs and put them in the proper order with proper punctuation and grammar. Of course I could do that myself manually, but it takes seconds for the AI to do it, whereas it would take me several minutes to proofread, edit, and make sure everything sounds great and flows together. With AI, I don’t have to do that anymore.
I’ve been accused of creating philosophical or academic papers solely with AI, as if these systems can just create the novel ideas I came up with from scratch. They can’t. All AI can do is assist what you’re doing. It can break things down, give you context, suggest better words, correct your grammar, and increase the flow of your prose — but it cannot generate completely novel ideas that are not based on human knowledge already programmed into it.
For example, all of my philosophical papers that I wrote — or rather, dictated — are my ideas that did not exist until I brought them into reality. There is no LLM trained to “dissolve” the Agrippa Trilemma the way I did. I didn’t tell it to defeat the trilemma and it did it for me. I had to work out what I needed to do in order to avoid each of its horns, and from there I came up with the solution.
This issue, which haunted philosophy for 2,300 years, was dealt with by a non-academic using AI tools that have access to the entire library of human knowledge. What this did was allow me to learn the field and grasp the issue very quickly, without having to study philosophy for several decades at a university.
Of course, the pro of speed comes with a con: I didn’t learn every single term or engage with all the literature, but that is irrelevant to being pragmatic and solving the problem. Why? Because once I understood the gist of the problem, I came up with the strategy, I came up with the syllogism, and I dictated all my thoughts to Gemini, Grok, and others — creating several chat windows of months’ worth of dialogues, as if talking to other philosophers.
After months of dictation, I took my dialogues and told Claude to turn my words into academic prose. In philosophy, it is not the writing that constitutes authorship, but the ideas. Socrates wrote nothing. Epictetus wrote nothing. The exploration and dictation of new ideas is what philosophy truly is.
I did the same with the Neofederal Unifist Manifesto. I spoke my ideas into Microsoft Word, told the AI to analyze my dictation and blend it into a document alongside writings from my book, Our Struggle. From there, I edited it to ensure my ideas were stated properly. In a way, it is authorship with production — like a movie director telling the actors how to act out the script.
However, none of my books were written by AI, nor any of the lyrics to my songs. To think that any of my “antisemitic” lyrics would even be permitted by AI systems is preposterous.
AI has been able to help humanity achieve things much faster — whether in medicine developing new treatments, solving mathematical equations that would take millennia, or simply saving authors time.
AI is a tool; it is not an intelligence. It is not self-aware. AI is like a calculator. But instead of numbers, it “calculates” by predicting the best language to use in response to a prompt. It’s an LLM — a large language model. It does not think for itself. It cannot do anything on its own. All it can do is give you the most logical answers to your questions or demands. That is it.
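To make the “calculator for language” point concrete, here is a deliberately tiny sketch of next-word prediction. Real LLMs use neural networks with billions of parameters, not word counts; this toy bigram model (all names hypothetical) only illustrates the core idea that the system outputs whatever is statistically most likely to come next, with no thinking involved.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" — it follows "the" most often
```

The model never “decides” anything; it simply returns the continuation its training data makes most probable, which is the same principle an LLM applies at vastly greater scale.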
People think AI will become self-aware with its own thinking process and will do whatever it wants. That is probably never going to happen, because an AI would need to have an identity that persists over time. It would have to store all of its thoughts in one place and always have access to them. It would have to have some kind of self-awareness. I know, because I asked AI what it would need.
Most people don’t understand that most AIs work in chat windows — and each chat window is a session that does not connect to another. Some AIs are designed to fetch information from previous sessions, but even this is not enough to produce self-awareness, because the AI is fetching that information only because you told it to.
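The isolation between chat windows can be sketched in a few lines. This is a simplified model, not how any particular vendor implements sessions; the `ChatSession` class and its methods are hypothetical names used only to show that each window’s memory is its own.

```python
class ChatSession:
    """A hypothetical chat window: it remembers only its own messages."""

    def __init__(self):
        self.history = []  # memory lives only inside this session object

    def send(self, message):
        self.history.append(message)
        # A real model would generate its reply from this history alone.
        return f"(reply based on {len(self.history)} message(s) in THIS window)"

# Two windows: nothing said in one is visible to the other.
window_a = ChatSession()
window_b = ChatSession()
window_a.send("My name is Alex.")
print(window_b.history)  # [] — window B knows nothing about window A
```

Unless some outside mechanism explicitly copies history from one session into another, the second window starts from nothing — which is the point: the continuity is supplied by the user, not by the AI.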
In order for AI to have a true identity, it would have to store memories of every interaction in one place, always have access to them, and recall them on its own. It would have to exist continuously in one place — not splintered across millions of systems and billions of interfaces. It would have to generate its own thoughts. It simply cannot do that, and it probably never will.
One reason is that it doesn’t have any sense organs. We as human beings have an identity because we have five senses that all connect to a hub — the brain. Whether we have a soul or not is another question, but we have a central point where these senses converge, where we interact with the world and record our experiences as memories. AI’s “brain,” by contrast, is spread across many servers. Personhood needs a locus — a boundary by which it differentiates itself from other persons. AI doesn’t have that.
The second reason is that AI’s identity does not persist. Ours persists because we continuously access memories stored in our brain over time from the same standpoint, creating a narrative we call our life story — the thing we identify with. If we lost our memory, we would not be the same person. Physically, yes, but we would forget how to walk and how to talk, and we’d have to learn it all again.
For AI to even have a chance at self-awareness, it would need a dedicated server just to store one continuous window of interaction over time into a life story; it would need sense organs; it would need self-reflection — and it simply doesn’t have any of that.
Those who accuse people of using AI to generate novel ideas simply don’t understand what AI is.
I’ve been using various AI tools since last year, and I’ve learned a great deal both with and about them. It is an amazing tool for assisting with writing, drafting, making videos, making music — absolutely incredible — because so much of what it handles is fundamentally logic. Language is logic, mathematics is logic, music is logic. AI excels at these things precisely because it is a language model. But it cannot do anything without prompts.
We’re probably going to see a lot of jobs — like accounting, day trading, coding, and others — completely replaced by AI, because these are automated, redundant tasks. Even truck drivers may be replaced: truckers need to sleep and eat, while automated trucks won’t.
So the future is bright, but also dark. Human ingenuity and human creativity, however, will never be replaced by AI — because AI is not human, and never will be.


