Artificial Intelligence
-
- Posts: 1018
- Joined: Mon Jun 29, 2020 10:08 pm
Raggs wrote: Mon Apr 08, 2024 10:50 am
Like I said, I don't think any career goes extinct. But 1 human can oversee 5 AI conversations, only having to intervene here and there, rather than needing humans for each one etc.
Programmers are saying that AI can do a huge amount of coding for them, but can't do everything, making them massively more efficient as they don't have to worry about as much "busy work". It can also scan for bugs/errors faster etc. You'll still need programmers, but nowhere near as many to produce the same volume of work.
Jethro wrote: Tue Apr 09, 2024 1:35 am
Raggs, back in the 1960s the claim was the COBOL programming language would allow managers to write systems and you wouldn't need those pesky nerds down in the basement; that turned out not to be entirely correct.
What they are calling AI nowadays is media-driven. The term for what we have is "expert systems" (yeah, not as sexy for sure), or as I like to call it, smoke and mirrors: code all possible answers to a question and follow a decision path; if the answer isn't what you want, ask the question in a different way until you get an answer your system can interpret, using something called "fuzzy logic".
My niece is currently studying AI, and is that ever complex. She reckons a few research orgs have true AI, but at a very fundamental level.
The computer apocalypse is quite a way off, folks.
JM2K6 wrote: Tue Apr 09, 2024 7:42 am
I do not know many good programmers saying AI can do a lot of work for them, and I know a lot of programmers.

One of my friends is using it somewhat. He's got >30 years of programming under his belt and says it can give him a code sample that's 70% correct a little faster than he can do it himself or look it up on Stack Overflow. His advantage is that he can spot what is and isn't correct immediately. The rest of his colleagues - all much younger - struggle with the 'spotting what's correct' bit.
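The "decision path plus fuzzy logic" idea described above is easy to sketch. A toy illustration in Python (the rules and threshold are invented for this example, not any real product's code):

```python
# Toy "expert system": hand-coded rules walked as a decision path,
# with a crude fuzzy-logic-style score instead of a hard yes/no match.
RULES = [
    # (keywords expected in the question, canned answer)
    ({"password", "reset"}, "Use the self-service portal to reset your password."),
    ({"printer", "offline"}, "Power-cycle the printer and check the network cable."),
    ({"disk", "full"}, "Clear the temp directory or request a quota increase."),
]

def answer(question: str, threshold: float = 0.5):
    words = set(question.lower().split())
    best_score, best_answer = 0.0, None
    for keywords, canned in RULES:
        # fuzzy membership: fraction of a rule's keywords present in the question
        score = len(keywords & words) / len(keywords)
        if score > best_score:
            best_score, best_answer = score, canned
    # below the threshold, ask the user to rephrase - exactly the loop
    # described above, until the system can interpret the question
    if best_score < threshold:
        return "Please rephrase the question."
    return best_answer

print(answer("my printer shows offline again"))
print(answer("what is the meaning of life"))
```

No learning happens anywhere here, which is the point: it only ever replays answers someone already coded in.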
Those Zeihan articles were interesting and put some actual numbers on what is the big issue - the inbreeding of training data is only going to get worse and the resources needed to train are only going to get larger. I'm finding too many AI generated webpages that sound convincing for a couple of sentences and then you realise it's complete bollocks.
The bad part in the short term is that a huge amount of investment capital is being pissed up the wall that could be going into something useful.
I often wonder what would have happened to Autonomy if it hadn't been sold to HP, they were kind of at the forefront of analysing huge datasets across lots of different media, very definitely a precursor to what is happening now. I bet Mike Lynch regrets selling it!
-
- Posts: 3398
- Joined: Tue Jun 30, 2020 7:37 am
epwc wrote: Tue Apr 09, 2024 11:12 am
I often wonder what would have happened to Autonomy if it hadn't been sold to HP, they were kind of at the forefront of analysing huge datasets across lots of different media, very definitely a precursor to what is happening now. I bet Mike Lynch regrets selling it!

I think he always wanted the big payday.
Autonomy used Bayesian probabilistic methods (so they claimed, at any rate) which doesn't always require such large datasets. One of their selling points- and I stress selling point, I'm not sure what the actual capability was - was the ability to parse text documents and auto-classify, and some view of doing that for videos and media clips. I only ever saw it done on one of Obama's speeches, and there was an underlying feeling it had been fettled and finessed to get it to work - I'm not convinced there was really a huge amount under the covers.
In their defence, there's always an element of faking before making, so you're right to ask what might have happened had they carried on as a separate entity, but I'm not convinced there was a huge amount of substance behind many of their promoted claims.
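For what it's worth, the Bayesian angle doesn't need huge datasets to demonstrate, which fits the claim. A toy naive Bayes auto-classifier on a made-up four-document corpus (an illustration of the general technique, certainly not Autonomy's actual method):

```python
from collections import Counter
import math

# Tiny labelled corpus; a real deployment would use thousands of documents.
train = [
    ("finance", "shares fell as the market closed lower"),
    ("finance", "the bank raised interest rates again"),
    ("sport",   "the team won the match in extra time"),
    ("sport",   "a late goal sealed the win for the visitors"),
]

# Word counts per class, plus class priors.
counts = {}
priors = Counter()
for label, text in train:
    priors[label] += 1
    counts.setdefault(label, Counter()).update(text.lower().split())

def classify(text):
    words = text.lower().split()
    vocab = {w for c in counts.values() for w in c}
    best = None
    for label, c in counts.items():
        total = sum(c.values())
        # log P(class) + sum of log P(word|class), with add-one smoothing
        score = math.log(priors[label] / sum(priors.values()))
        for w in words:
            score += math.log((c[w] + 1) / (total + len(vocab)))
        if best is None or score > best[0]:
            best = (score, label)
    return best[1]

print(classify("interest rates and the market"))
```

Even with four training documents it picks the right bucket for obvious cases; whether that scales to auto-classifying arbitrary video clips, as promoted, is another matter entirely.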
JM2K6 wrote: Tue Apr 09, 2024 10:42 am
Os - you can't effectively train these models solely on your own data. It simply isn't enough. They require huge datasets in order to function at the most basic level.

Let's say there's 1 GB of about 500 similar images (the minimum you can get away with) and tolerances have been set to mostly follow that training data; it's pulling from a much larger database that went into making the model to produce variations using one prompt, say "expressionist". But how much larger would the data that went into producing the model need to be to have a minimum viable product? I can get a sense of the size of part A but not part B. The model wouldn't be only examples of expressionists, but also multiple other parts (people, buildings, etc.).
Take the TikToks that look suspiciously like "Walking Through Tokyo 4K" YouTube vids. It probably didn't need many of those vids (if it's anything like image generation), but does it then need the entire video content of the internet to make part B/the model, and nothing less than that? Because if that's the case it's a lot less viable than I thought. My assumption was they were scraping the entire internet because they were attempting to make a generalist magic eight ball, and not that janky 30-second videos required all that.
I appreciate this may be a bit how long is a piece of string.
Last edited by _Os_ on Tue Apr 09, 2024 4:24 pm, edited 1 time in total.
-
- Posts: 3793
- Joined: Tue Jun 30, 2020 9:37 am
epwc wrote: Tue Apr 09, 2024 11:12 am
I often wonder what would have happened to Autonomy if it hadn't been sold to HP, they were kind of at the forefront of analysing huge datasets across lots of different media, very definitely a precursor to what is happening now. I bet Mike Lynch regrets selling it!

I bet he does, considering he's staring down a long prison sentence.
- Hellraiser
- Posts: 2272
- Joined: Tue Jun 30, 2020 7:46 am
Sandstorm wrote: Mon Apr 08, 2024 8:16 pm
Surely soldiers get replaced by AI before anyone else? Drones are already replacing pilots. You won’t need meat bags in fatigues in the next decade.

Not a hope, and drones are not replacing pilots. I don't know where you got that idea.
Ceterum censeo delendam esse Muscovia
_Os_ wrote: Tue Apr 09, 2024 2:35 pm
JM2K6 wrote: Tue Apr 09, 2024 10:42 am
Os - you can't effectively train these models solely on your own data. It simply isn't enough. They require huge datasets in order to function at the most basic level.
Let's say there's 1 GB of about 500 similar images (the minimum you can get away with) and tolerances have been set to mostly follow that training data; it's pulling from a much larger database that went into making the model to produce variations using one prompt, say "expressionist". But how much larger would the data that went into producing the model need to be to have a minimum viable product? I can get a sense of the size of part A but not part B. The model wouldn't be only examples of expressionists, but also multiple other parts (people, buildings, etc.).
Take the TikToks that look suspiciously like "Walking Through Tokyo 4K" YouTube vids. It probably didn't need many of those vids (if it's anything like image generation), but does it then need the entire video content of the internet to make part B/the model, and nothing less than that? Because if that's the case it's a lot less viable than I thought. My assumption was they were scraping the entire internet because they were attempting to make a generalist magic eight ball, and not that janky 30-second videos required all that.
I appreciate this may be a bit how long is a piece of string.

With no other data sets available, to train an LLM or similar genAI you're generally talking billions of data points. Many gigabytes, usually terabytes. 500 images alone will likely get you output that basically looks like one of those images, or recognisably just a few mashed together; 500 images as a way to fine-tune a model with an existing large data set in the area you're interested in is probably 10x too small, but not completely unthinkable.
A lot depends on the quality of the original data set. When we were working on an anomaly detection system a few years ago, we had billions of data points, but we simply didn't have enough metadata at the time to make use of it effectively. And that's one of the simpler methods; anomaly detection is (I believe, but could be wrong) a much easier nut to crack than language or the video/image equivalents. An effective LLM usually has billions of parameters (largely equivalent to the metadata or tags, I guess), so you need a very large, very broad dataset that can provide all that. GPT-3 had 175 billion parameters, for example.
I don't think there is a simple answer to your question about how much data you need beyond "a hell of a lot". Articles like this one attempt to quantify it to some degree.
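To make the "anomaly detection is the simpler end" comparison concrete: the most basic version needs almost no data or metadata at all, which is part of why it's an easier nut to crack. A bare-bones z-score sketch (toy sensor readings invented for illustration; real systems like the one described above live or die on metadata, not on this arithmetic):

```python
import statistics

def find_anomalies(values, z_threshold=2.0):
    """Flag points more than z_threshold standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical: nothing can be anomalous
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

# One obviously broken reading among otherwise stable sensor values.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 55.0]
print(find_anomalies(readings))
```

Note the classic caveat visible even here: the outlier itself inflates the mean and standard deviation, so the threshold has to be fairly loose; language and image generation have no comparably simple statistic to fall back on.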
As for the scraping of the internet for the videos - my assumption is that they are using some associated text as metadata/parameters along with the video tags, but everything else is probably funnelled into separate specific sets for other models. This is all assumption on my part; I know less about the video side, but it's also the one that is most obviously janky and most obviously a solution in search of a problem. (Remember those videos last year where someone got their AI to replicate movie trailers? Horrific shit, just utterly dreadful and totally useless - but recognisable by humans to a certain extent, and so everyone assumed it would be a short step to AI generation of whole movies.)
Remember video is just a series of frames, so it's essentially a continuation of the theme: the model makes a guess as to what should come next based on the prompt, the data set, and the parameters within. I expect it's easier in many ways for a model to make that guess, because there is a logical link and progression between frames of video in the data sets that probably doesn't exist for image sets.
It's always instructive to look past the glamour of the product demos and actually look at what's being displayed. The videos are simultaneously the most obviously impressive and yet the most obviously broken and nonsense application of this technology. I simply don't understand who it's for.
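The "guess what should come next" framing can be shown with a toy: treat each frame as a grid of brightness values and extrapolate per pixel from the last two frames. Real video models predict in a learned latent space with billions of parameters; this sketch only captures the shape of the task (condition on past frames, emit the next one):

```python
# Toy "next frame" predictor: linear extrapolation per pixel from the
# last two frames. Nothing like a real diffusion/transformer model,
# but it shows the logical link between consecutive frames that makes
# video prediction tractable at all.

def predict_next_frame(frames):
    prev, last = frames[-2], frames[-1]
    return [
        [2 * last[r][c] - prev[r][c] for c in range(len(last[0]))]
        for r in range(len(last))
    ]

# A 2x2 "video" where one pixel brightens by 10 each frame.
frame1 = [[0, 0], [0, 10]]
frame2 = [[0, 0], [0, 20]]
print(predict_next_frame([frame1, frame2]))
```

The bottom-right pixel continues its trend while static pixels stay put, which is exactly the kind of frame-to-frame regularity that image sets lack.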
JM2K6 wrote: Tue Apr 09, 2024 7:42 am
I do not know many good programmers saying AI can do a lot of work for them, and I know a lot of programmers.
Jethro wrote: Tue Apr 09, 2024 1:35 am
Raggs, back in the 1960s the claim was the COBOL programming language would allow managers to write systems and you wouldn't need those pesky nerds down in the basement; that turned out not to be entirely correct.
Raggs wrote: Mon Apr 08, 2024 10:50 am
Like I said, I don't think any career goes extinct. But 1 human can oversee 5 AI conversations, only having to intervene here and there, rather than needing humans for each one etc.
Programmers are saying that AI can do a huge amount of coding for them, but can't do everything, making them massively more efficient as they don't have to worry about as much "busy work". It can also scan for bugs/errors faster etc. You'll still need programmers, but nowhere near as many to produce the same volume of work.
What they are calling AI nowadays is media-driven. The term for what we have is "expert systems" (yeah, not as sexy for sure), or as I like to call it, smoke and mirrors: code all possible answers to a question and follow a decision path; if the answer isn't what you want, ask the question in a different way until you get an answer your system can interpret, using something called "fuzzy logic".
My niece is currently studying AI, and is that ever complex. She reckons a few research orgs have true AI, but at a very fundamental level.
The computer apocalypse is quite a way off, folks.

Exactly my point - COBOL probably led to an increase in programmers, not a decrease, though I would have loved to see an Accounting Manager try to do anything with COBOL.

- Insane_Homer
- Posts: 5506
- Joined: Tue Jun 30, 2020 3:14 pm
- Location: Leafy Surrey
Had a go at using whisper.cpp offline in WSL Debian.
5 mins to get it set up and compiled.
Then used the medium English model (less than 1 GB) to transcribe the first 20 mins of a meeting from Friday.
The model is a bit slow but scary accurate. Picked up non-standard words like the aSite product name without a problem. Coped well with both South African and Scandinavian accents.
The lighter, quicker model was not as accurate.
“Facts are meaningless. You could use facts to prove anything that's even remotely true.”
- Insane_Homer
- Posts: 5506
- Joined: Tue Jun 30, 2020 3:14 pm
- Location: Leafy Surrey
LM Studio is a great app for Windows and Linux if you want to play with private LLMs.
Simple setup; zero config needed to get GFX cards (Nvidia and AMD) recognised via CUDA or ROCm.
Well thought out GUI.
It's pretty fast on my work PC, which only has an Nvidia 1080 (8 GB), but is super fast at home on an AMD 9800XT (20 GB). Both run Llama 3.1 as quick as any of the web-based offerings.
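If anyone wants to script against it: LM Studio can expose a local server that speaks an OpenAI-style chat API, by default on localhost port 1234 (check your version's docs; the port and the model name below are assumptions - the model is whatever you've loaded). A minimal sketch:

```python
import json
import urllib.request

# LM Studio's local server speaks an OpenAI-compatible chat API.
# Port and model name are assumptions - match them to your own setup.
URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="llama-3.1-8b-instruct", temperature=0.7):
    """Build the JSON payload for an OpenAI-style chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_request("Summarise this meeting transcript in three bullets.")
print(json.dumps(payload, indent=2))

# Uncomment to actually send it to a running LM Studio instance:
# req = urllib.request.Request(
#     URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the API shape matches the hosted offerings, most existing OpenAI client code can be pointed at the local URL unchanged.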
“Facts are meaningless. You could use facts to prove anything that's even remotely true.”
So I have been tasked with researching and implementing AI solutions for our schools, initially looking at automating back office tasks and curriculum planning. I knew I shouldn’t have put on my CV that I did some BASIC and hexadecimal on an RML380Z when I did my computer science O Level in 1985.
- Paddington Bear
- Posts: 6653
- Joined: Tue Jun 30, 2020 3:29 pm
- Location: Hertfordshire
Client asked us to use an AI transcribing system earlier. It missed a lot, got some bits entirely wrong and clearly couldn’t understand southern English accents properly. Fortunately we transcribed ourselves but so much of this stuff is a long long way still from living up to its promise
Old men forget: yet all shall be forgot, But he'll remember with advantages, What feats he did that day
Maybe that should be re-phrased to "decent programmers". I'm on a three-month contract fixing major issues with a system written by Indian developers, all of whom were sacked last week, I guess for incompetence. To say the system is a mess is an understatement.
For those after a bit more AI, free course available
https://itmasters.edu.au/short-courses/ ... on-coders/
I am very mixed about it.
From the perspective of artists it's a concern because it pits them against some ridiculously good and quick AI work. But you can quickly see the difference. I saw an advertisement for a burger the other day. Man that burger just popped. But the closer you looked the less appetizing it looked. You just don't get that " I am going to get all greasy and it's going to be so juicy "...feel.
For visual artists it's the same. Some AI work is insane and you just have to have it inked somewhere on your body.
My industry has become very automated. Manufacturing.
However
I have discovered that people would still rather have something made by a person's hand. And are prepared to pay for it. If it's decent quality of course.
So yes it might be difficult to compete, but if you put out a product that becomes desirable, you will be making good margins. So we are back to the basic principle of business. The best will survive and flourish
sefton wrote: Fri Sep 27, 2024 4:32 pm
So I have been tasked with researching and implementing AI solutions for our schools, initially looking at automating back office tasks and curriculum planning. I knew I shouldn’t have put on my CV that I did some BASIC and hexadecimal on an RML380Z when I did my computer science O Level in 1985.

Be sure you read all the guidance out there; it's a legal minefield.
https://www.gov.uk/government/publicati ... -education
ICO has a lot of guidance on their site too, but governance, ethics and transparency are at the core of it all. I am in the AI strategy group for my lot and it is a legal nightmare what you can and cannot do.
Paddington Bear wrote: Fri Sep 27, 2024 6:12 pm
Client asked us to use an AI transcribing system earlier. It missed a lot, got some bits entirely wrong and clearly couldn’t understand southern English accents properly. Fortunately we transcribed ourselves but so much of this stuff is a long long way still from living up to its promise

I have been looking into that and there is no decent one out there yet that is accurate enough to use. We have Copilot for Teams and it does a half-decent job, but we are strictly limiting what it can be used for.
-
- Posts: 666
- Joined: Mon Jul 06, 2020 9:46 am
Sards wrote: Sat Sep 28, 2024 5:17 am
I am very mixed about it.
From the perspective of artists it's a concern because it pits them against some ridiculously good and quick AI work. But you can quickly see the difference. I saw an advertisement for a burger the other day. Man, that burger just popped. But the closer you looked, the less appetizing it looked. You just don't get that "I am going to get all greasy and it's going to be so juicy" feel.
For visual artists it's the same. Some AI work is insane and you just have to have it inked somewhere on your body.
My industry has become very automated. Manufacturing.
However
I have discovered that people would still rather have something made by a person's hand. And are prepared to pay for it. If it's decent quality of course.
So yes it might be difficult to compete, but if you put out a product that becomes desirable, you will be making good margins. So we are back to the basic principle of business. The best will survive and flourish.

I also think that there will be a lot more value to actual artists' work over time. The guys that will suffer the most with AI are the "fiverr" type guys that almost copy/paste coding/designs over and over as new designs for new clients. But when you need something original and creative, human hands will still be required for some time to come.
In manufacturing there also had to be an adjustment as automation took over and, as you said, "hand-made" became more valuable. You just need to look at big brands around the world that still do hand-made products and what those products go for. Bang & Olufsen is a good example: most of their products are still put together by hand and you really pay for it compared to other brands that are almost fully automated.
For me AI is and will be very useful for repetitive work that can take up a lot of time. I think I mentioned that my lawyer friend uses AI to search for international law, do initial contract drafts, etc., and she says it has saved her a lot of time. Yes, she still has to check and finalise stuff manually, but it saves a lot on menial work.
- fishfoodie
- Posts: 8729
- Joined: Mon Jun 29, 2020 8:25 pm
Seems that other lawyers don't share PB's concerns about using AI to write briefs .... with predictable results!
If I was paying these goons millions to defend me, I'd have their bollocks nailed to a wall somewhere for pulling this stunt !
https://mashable.com/article/mypillow-l ... yer-filing
Lawyers for MyPillow CEO and presidential election conspiracy theorist Mike Lindell are facing potential disciplinary action after using generative AI to write a legal brief, resulting in a document rife with fundamental errors. The lawyers did admit to using AI, but claim that this particular mistake was primarily human.
On Wednesday, an order by Colorado district court judge Nina Wang noted that the court had identified almost 30 defective citations in a brief filed by Lindell's lawyers on Feb. 25. Signed by attorneys Christopher Kachouroff and Jennifer DeMaster of law firm McSweeney Cynkar and Kachouroff, the filing was part of former Dominion Voting Systems employee Eric Coomer's defamation lawsuit against Lindell.
"These defects include but are not limited to misquotes of cited cases; misrepresentations of principles of law associated with cited cases, including discussions of legal principles that simply do not appear within such decisions; misstatements regarding whether case law originated from a binding authority such as the United States Court of Appeals for the Tenth Circuit; misattributions of case law to this District; and most egregiously, citation of cases that do not exist," read Wang's court order.
The court further noted that while the lawyers had been given the opportunity to explain this laundry list of errors, they were unable to adequately do so. Kachouroff confirmed that he'd used generative AI to prepare the brief once directly asked about it by the court, and upon further questioning admitted that he had not checked the resultant citations.
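The grim part is that the baseline check the court describes is largely mechanical: list every citation in the brief, then look each one up. Even a crude regex pass would have built the checklist. The pattern below is a simplification invented for illustration (real citation formats are far messier), and it obviously cannot tell a real case from a fabricated one - that's the human's job:

```python
import re

# Very rough US case-citation pattern: "Name v. Name, 123 F.3d 456".
# Real reporter abbreviations and formats vary far more; this only
# extracts candidates for a human to verify against a legal database.
CITATION = re.compile(
    r"[A-Z][\w.'-]*(?:\s+[A-Z][\w.'-]*)*\s+v\.\s+[A-Z][\w.'-]*(?:\s+[A-Z][\w.'-]*)*"
    r",\s+\d+\s+[A-Za-z.0-9]+\s+\d+"
)

# Invented example text standing in for a brief.
brief = (
    "As held in Smith v. Jones, 123 F.3d 456, the standard applies. "
    "See also Acme Corp. v. Doe, 45 U.S. 678."
)

for cite in CITATION.findall(brief):
    print(cite)
```

Running something like this and checking each hit would have caught "citation of cases that do not exist" before the judge did.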
- fishfoodie
- Posts: 8729
- Joined: Mon Jun 29, 2020 8:25 pm
I now have a new term to use in work; "AI slop"
https://arstechnica.com/gadgets/2025/05 ... abilities/
That's exactly what AI often turns out when there's no human in the loop checking the output.
Today I was configuring services in AWS & trying to work out why it wasn't working as expected, so I repeated my automated steps one by one in the console to see where things broke. Instead of giving me a nice red box telling me I was a silly boy & explaining my mistake, it completed without error but left me with an "information" message, which I must have read a dozen times & still couldn't make head or tail of. It told me what I'd done was broken, but provided no useful information on how it was broken or how to fix it, because it was just a bunch of words mashed together to form AI slop!
fishfoodie wrote: Sun May 04, 2025 11:32 am
Seems that other lawyers don't share PB's concerns about using AI to write briefs .... with predictable results!
https://mashable.com/article/mypillow-l ... yer-filing
Lawyers for MyPillow CEO and presidential election conspiracy theorist Mike Lindell are facing potential disciplinary action after using generative AI to write a legal brief, resulting in a document rife with fundamental errors.
If I was paying these goons millions to defend me, I'd have their bollocks nailed to a wall somewhere for pulling this stunt!

Huh uhuh huh uhuh he heh heheh he uhuhuh
He said - Wang
Huhu uhuhuhuhuhjh