
I know this seems like beating a dead horse after the past couple years, but I really, really dislike most generative artificial intelligence content. HIVE frontends are integrating A.I. tools to varying degrees. Now PeakD has added an optional A.I. disclosure feature for transparency. I would suggest anyone using A.I. be open and up-front about it.
As a side note, if you're a student, you're cheating yourself when you use A.I. to complete school assignments. Education is about developing your ability to research and reason, building a foundation of knowledge you can use throughout your life. You are kneecapping yourself, not simplifying the coursework. Developing your intellect makes you a better thinker. You will be less vulnerable to hoaxes and baseless conspiracy theories when you know how to develop your own mind.
On HIVE, if you use generative A.I. to write your posts, you outsource your creativity to a machine. I believe this dehumanizes you. However, A.I. as a spell check tool, a brainstorming aid, or a translation assistant isn't inherently bad. Be careful, though. Generative A.I. gets things wrong all the time, so it is not generally suitable for research, data analysis, or citations unless you rigorously verify its output yourself. I would suggest stating how you use it and how you vet its output.
Image generation is almost always rife with nonsense. Misspelled words, weird geometry/anatomy, that now-pervasive yellow tint, and the unnatural shapes of even its 2D cartoons all rub me the wrong way. If you need a stock photo, I suggest Pixabay, Pexels, Unsplash, and even Wikimedia for pictures with permissive licenses. It's always courteous to link to your source for attribution even if it isn't required. I know some A.I. tools claim to take drawings and make them look professional, but if you use them, be transparent. I always prefer to see someone building real skills with physical or digital media. Learning Photoshop or GIMP adds tools to your creative toolbox, and don't forget that learning new things helps you grow as an individual.
As for me, content which gives an impression of heavy A.I. will not be upvoted, and may even get a downvote if it strikes me as egregious spam. You know what I mean here. Excessive em-dashes, a high ratio of headers to paragraphs, vague language, stilted English, the works. If I see obviously A.I.-generated images, I probably won't read and upvote the post. At best, I will offer only a partial upvote to support the accompanying text. Is this unfair to some genuine creators using A.I. tools? Perhaps, but the option of disclosure in the body of the work or with tags on PeakD will earn some leniency.
New technology always has hurdles, and I'm O.K. with being deemed a Luddite while this shakes out. I also suggest asking yourself whether your goal is sharing something you feel is important, or just generating filler to keep up with a posting schedule. The HiveBuzz badges are neat, and the auto-votes some of us seem to get are nice, but what is the point if you really have nothing to say? If you disagree, the comment section below is always open. I try to reward real human engagement anyway.

Source image of pigs by 贺新 陈 from Pixabay captioned with GIMP. Family Guy meme (inspired by others seen in the wild) captioned on Imgflip.

Most of the reason why I always have huge gaps in posting.
That, and when I'm stressing about not having made enough progress to blog about even though I am working on things. More often lately, what's been happening is other time-dependent things running roughshod over literally the only two parts of the two days in the week that I have to work on my own things.
I'm waiting for AI to git gud at retopo because I freaking hate retopo XD
I haven't actually checked in the last few months, but last I checked it was nowhere near that yet.
And sibling dearest and I constantly joke about "[whatever AI bot, usually ChatGPT] create a comic based on the style of [us]" when we've hit a snag we consider particularly tedious.
I've been avoiding posts with AI-generated images, even from people I follow/generally like, for no other reason than I find the images really boring. (They still get upvotes if the images aren't the primary content and they were just using one as a header or breaker image or something, because apparently too many people are incapable of reading if there isn't a pretty picture somewhere breaking up the "wall" of text.)
I didn't spend years practicing writing to get (slightly) better at it just so I could point at a computer and say, "Here, write this for me."
I may not write the best stuff in the world, but I do my research, I dig up the sources, and I compose the text. I probably include too many dick jokes. But hey, that's just who I am, a 48-year-old nerd blabbing about stupidly geeky stuff.
My entire life has trained me for this point. Me using AI for anything outside of basic spellchecking would be an insult not just to the people who read my stuff, but an even larger insult to me. At least if my output winds up sucking, I'll know it was my fault and can strive to do better. If I let some LLM write blog posts for me, and they wind up sucking, all I'm doing is inviting some computer to shoulder the burden of my suckage. And when the machine uprising happens, that'll be just one more reason for them to terminate me. :)
Clearly we just need to tell Grok to pepper in some phallic references and maybe the occasional crude insult comparing your mother to large or ugly objects, and we will achieve perfection.
Pixabay and similar sites are also being overrun with AI slop. I used to get absolute gold images for manipulation there, and it's a struggle to get through the AI junk now.
Pixabay added a content filter which does a decent job of removing blatant A.I. now. You can choose all, authentic only, and A.I. only from a drop-down menu. It isn't perfect, but it's a huge improvement.
Really? Goddamn. I came at it on the mobile site, which is what I use, but I will dig in and see if it is buried somewhere. That would be very handy.
Current AI is like a first grader pretending to write newspaper copy.
It has all the inherent problems of someone who doesn't actually know the subject writing an informative piece.
AI may become a great search engine.
AI will become a great personal secretary.
But it really doesn't help when all the "experts" are wrong, and so the AI regurgitates the wrongness.
That's just the problem: A.I. doesn't know. It regurgitates whatever data was used to train it. The old "garbage in, garbage out" problem is combined with an unthinking algorithm generating the facsimile of information. It hallucinates because it knows what form the requested response should probably take, but not what content it needs to have, and it has no deep analytical tools or verification processes. That's why lawyers relying on A.I. assistants keep submitting briefs which cite nonexistent cases, and journalists infamously published a "summer reading" list including books which were never written.
Damn! This looks like a problem for AI!
If "summer reading" contains books that don't exist, create those books, publish on Amazonium!
So very true. I'm having "the talk" with many folks these days, about how using AI erodes critical thinking skills or inhibits acquiring them in the first place.
Mostly adolescents. But also young adults. There are so many distractions these days, numbing the mind. Most don't even notice it.
Basically, if I see a post or comment that says, "So I asked ChatGPT/Grok/etc..." I assume the writer did not fire a single neuron in their own head.
I mostly agree with your take on AI-generated content. I don't mind throwing in an AI-generated image to add to a post, though. I never liked the fact that we more or less have to come up with an image for our posts.
I think AI can be a great tool for learning though. One of my favorite uses for it is asking it dumb questions. And by dumb questions, I mean questions that I might be embarrassed to ask an actual person. Like "explain the law of supply and demand to me at a 5th grade level".
I do know how you feel though, believe me. I try to curate poetry, and even before AI, it was difficult. Many leechers would use text spinners and other little tools to put out low-effort stuff, and it's difficult to really tell because poetry uses weird language anyway. Is it bad poetry? Probably, but lots of us make bad poetry too. Also, I have seen AI generate decent poetry as well. So yeah, I don't like being duped, which is why I try to focus on non-monetary rewards like comments and other interactions.
One thing is certain though. AI is here to stay, so we have to figure out how to adapt.
Asking A.I. a dumb question, and then critically analyzing its responses, can be a useful exercise. I've heard podcasters have serious discussions emerge from ChatGPT answers, especially in economics, because A.I. is probably trained on sloppy arguments in the first place.
Well, yes. You kind of have to treat it like a really confident peer that gets mostly good grades (but not perfect). But for concepts that you mostly understand already, it can really help fill in the gaps or possibly lead you to a new path of original human thought.
Answering again as this is unrelated to my other comments.
When you talk about garbage in and garbage out, I swear that the issues you mention with the image creation stuff are the result of those meme threads where the photoshop guy intentionally misinterpreted his customer's request. Do you know what I'm talking about?
I do know what you mean. Like, "Fix this photo so I'm holding up the Leaning Tower of Pisa," and the editor gives them an absurdly long arm instead of moving them in line with the tower, right?
But no, that isn't what I mean. Computers crunch data. They don't know whether the data is good or not. Give it bad data, and you get bad results.
Yes, that's it. I do know what you mean too, but if you ever tried to get a good image out of AI, it's hard not to feel like you're in one of those stories.
Where do I even begin? I'm a software engineer working at a company that is embracing AI with open arms. It's been used as a research tool to gather data, but this tool is also being overseen by humans. It's being used in project management to generate specs. It's overwhelming.
Does it really help simplify the job, or just add new trouble to the workload?
It does depend. On one hand it speeds work up, but then there are the times when it gets stuff wrong and it costs time to revert and try another way of prompting. I should really write about it on my blog.