Artificial Intelligence

Where goats go to escape
C T
Posts: 229
Joined: Tue Jun 30, 2020 2:40 pm

More and more I am finding myself having quite an extreme view on AI, and it is not in favour.

Essentially, what I'm seeing is CEOs frothing at the mouth with an unsaid ambition to halve their workforce. Or, if they could, get rid of it altogether.

Then of course, what I'm starting to feel is more and more "pressure" (perhaps too strong a term) to get excited about it myself. Company strategy, bosses etc.

This might all stem from a personal belief that companies should be seeking to employ as many people as they can while still maintaining an acceptable level of profit. Not everyone will agree of course.

Anyway, here's my biggest problem with it. It is crap. CEOs are being sold a giant pig in a poke. You've got the DPD chatbot that was tricked into slagging off DPD. I've seen an example of a team at work getting a massive AI investment and then damn near breaking the company (not through the large investment, but through the subsequent output from the team). And then there are the interactions I've had with chatbots.

Saying all this, I'm sure there are plenty of examples where AI is helping people do things better and even saving lives.

Curious to know what others think?
Raggs
Posts: 3450
Joined: Mon Jun 29, 2020 6:51 pm

C T wrote: Mon Apr 08, 2024 9:27 am More and more I am finding myself having quite an extreme view on AI, and it is not in favour. […]
It's only going to get better, and probably quickly. I'd be very careful about going into any career that can be supplemented with AI, unless you have a serious passion for it.

If humanity does it right, it's a huge positive. As it won't, it's going to cause issues.
Give a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life.
Paddington Bear
Posts: 5234
Joined: Tue Jun 30, 2020 3:29 pm
Location: Hertfordshire

I remain quite sceptical. Most automated processes are totally unsatisfactory and require humans to circumvent. I don’t see that changing massively.
Old men forget: yet all shall be forgot, But he'll remember with advantages, What feats he did that day
Raggs
Posts: 3450
Joined: Mon Jun 29, 2020 6:51 pm

Paddington Bear wrote: Mon Apr 08, 2024 10:48 am I remain quite sceptical. Most automated processes are totally unsatisfactory and require humans to circumvent. I don’t see that changing massively.
Like I said, I don't think any career goes extinct. But 1 human can oversee 5 AI conversations, only having to intervene here and there, rather than needing humans for each one etc.

Programmers are saying that AI can do a huge amount of coding for them, but can't do everything, making them massively more efficient as they don't have to worry about as much "busy work". It can also scan for bugs/errors faster etc. You'll still need programmers, but nowhere near as many to produce the same volume of work.
Give a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life.
inactionman
Posts: 2350
Joined: Tue Jun 30, 2020 7:37 am

C T wrote: Mon Apr 08, 2024 9:27 am More and more I am finding myself having quite an extreme view on AI, and it is not in favour. […]
I'm all for it for things such as cancer detection, where it can harness the ability to interrogate millions of scans and recall and derive associations across a very small number of equivalent samples. I'm very much against it when applied to e.g. helpdesks, where context and nuance are missed. There's a reason an experienced helpdesk operator is a valuable asset.

It's not some magical thing, it's just massive pattern matching. Give it a load of MRI scans to process and it will pick up interesting features that human doctors might not notice. But the human doctor takes the result of that AI and considers it within their deliberation. It doesn't remove the doctor.
Paddington Bear
Posts: 5234
Joined: Tue Jun 30, 2020 3:29 pm
Location: Hertfordshire

Raggs wrote: Mon Apr 08, 2024 10:50 am […]
Yeah I can see this.

I work in law where a lot of what takes up our time is quite menial compared to what actually adds value. Could AI help with this? Definitely. Will it be more cost effective? Unlikely for a long time. Have I come across any automated system which I would use without checking myself before passing work across the desk of a partner (let alone a Court etc)? Not even close.

More generally, the number of systems/processes that fall down unless and until you can find a human to talk to is astonishing. There's a good chance AI changes roles, but I can't see it meaningfully replacing people across the board.
Old men forget: yet all shall be forgot, But he'll remember with advantages, What feats he did that day
Raggs
Posts: 3450
Joined: Mon Jun 29, 2020 6:51 pm

Paddington Bear wrote: Mon Apr 08, 2024 11:30 am […]
Does that check take longer than doing all the menial work in the first place? And if you have a way to update the model, reporting what was right, what was wrong, and what was lacking, it will get better.
Give a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life.
Paddington Bear
Posts: 5234
Joined: Tue Jun 30, 2020 3:29 pm
Location: Hertfordshire

Raggs wrote: Mon Apr 08, 2024 11:34 am Does that check take longer than doing all the menial work in the first place? And if you have a way to update the model, reporting what was right, what was wrong, and what was lacking, it will get better.
The honest answer to that is it varies.
Old men forget: yet all shall be forgot, But he'll remember with advantages, What feats he did that day
Raggs
Posts: 3450
Joined: Mon Jun 29, 2020 6:51 pm

Paddington Bear wrote: Mon Apr 08, 2024 11:36 am The honest answer to that is it varies.
That makes sense, but if it's not always, and you can actively give feedback to improve it, it's worth doing (at least for making it better, not for job security!).
Give a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life.
C T
Posts: 229
Joined: Tue Jun 30, 2020 2:40 pm

inactionman wrote: Mon Apr 08, 2024 11:29 am
I'm all for it for things such as cancer detection, where it can harness the ability to interrogate millions of scans and recall and derive associations across a very small number of equivalent samples. I'm very much against it when applied to e.g. helpdesks where context and nuance is missed. There's a reason an experienced helpdesk operator is a valuable asset.

It's not some magical thing, it's just massive pattern matching. Give it a load of MRI scans to process and it will pick up interesting features that human doctors might not notice. But the human doctor takes the result of that AI and considers it within their deliberation. It doesn't remove the doctor.
That's an interesting one. I can absolutely see that a machine learning/data science type model can have hundreds of thousands of historical MRI scans loaded into it, each labelled with the ultimate correct diagnosis and where that diagnosis is apparent in the scan. It'll know more than any human could, and ultimately it's provided to the doctor to help their conclusion. Really great example of AI adding value.
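In spirit that's just a supervised classifier. A deliberately tiny sketch of the idea in Python (all numbers invented, nothing like a real radiology pipeline):

```python
import math

# Toy stand-in for a labelled history: each scan reduced to a feature
# vector, paired with the confirmed diagnosis. All numbers invented.
LABELLED_SCANS = [
    ([0.9, 0.1], "tumour"),
    ([0.8, 0.2], "tumour"),
    ([0.1, 0.9], "clear"),
    ([0.2, 0.8], "clear"),
]

def suggest_diagnosis(features):
    """Nearest-neighbour match against the labelled history.
    The result is a suggestion for the doctor, not a decision."""
    _, label = min(LABELLED_SCANS,
                   key=lambda scan: math.dist(scan[0], features))
    return label

print(suggest_diagnosis([0.85, 0.15]))  # prints "tumour"
```

A real system learns its features rather than being handed them, but the shape is the same: labelled history in, suggested label out, doctor still in the loop.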

I've got experience working in a helpdesk context; years ago I used to be a team leader in a call centre. Me and another team leader joined the company, both of us external hires. Interestingly, the current group of team leaders had a culture (led by the manager) of not giving away write-offs. What I mean by write-offs is goodwill gestures. "Sorry customer, we made a mistake, have £10", that kind of thing.

They were very proud of this; they would sometimes spend hours talking to customers who had very valid gripes but giving them nothing. Me and the other team leader thought this was utter madness, eventually got rid of it, and even got the place to a point where handlers were empowered to give goodwill gestures.

My experience of a helpdesk environment is that it basically survives by people ever so slightly, but quite frequently, breaking the rules. You can never really have a set of rules that fits every customer scenario.

AI in a service setting is like that call centre was before me and the other TL started: very firmly sticking to the rules.
Biffer
Posts: 7873
Joined: Mon Jun 29, 2020 6:43 pm

People are overly impressed with the language models. We have an in-built tendency to anthropomorphise things: we see faces in clouds, we assign human emotions to pets, we name cars ffs. We're desperate to see humanity and intelligence when it's not there. These language models are dumb, and they also hallucinate. When asked to write scientific abstracts, they've been shown to make up thirty percent of their citations.
And are there two g’s in Bugger Off?
ASMO
Posts: 5250
Joined: Mon Jun 29, 2020 6:08 pm

I am involved with the review of AI within government, and there have been a lot of rumours out there about just how good it is... frankly, at the moment, it is not the silver bullet people think it is.
There are in effect three types of AI: Narrow/Weak, Strong (or Deep) AI, and Artificial Superintelligence. Roughly translated: Narrow = sub-human level of capability, Strong = human-equivalent, and Superintelligent = exceeds human.

Right now we are at the weak AI stage. Basically it can simulate human behaviour (not replicate it), so you have both generative and non-generative variants, but ultimately it can do fairly simple tasks very quickly within very narrowly defined parameters. We are by most reckonings about 100 years from making the step from Narrow to Strong AI, and there are many who don't believe Superintelligence will ever be achievable.

As an example, Fujitsu built a huge supercomputer (about the size of a football pitch) and it took all of that combined compute power 40 minutes to replicate a single second of neural activity, and this is just to get to the next level of capability.

Having used AI, I think it is oversold in terms of what it can do. I have used Copilot a lot, for example, and it's a bit meh; where it does add some value is in the security space, but outside of that, it's more of a novelty item.
Sandstorm
Posts: 9500
Joined: Mon Jun 29, 2020 7:05 pm
Location: England

Raggs wrote: Mon Apr 08, 2024 10:50 am You'll still need programmers, but nowhere near as many to produce the same volume of work.
Sux to be India. They're rolling out millions every year from universities.
Raggs
Posts: 3450
Joined: Mon Jun 29, 2020 6:51 pm

ASMO wrote: Mon Apr 08, 2024 12:13 pm […] As an example, Fujitsu built a huge supercomputer (about the size of a football pitch) and it took all of that combined compute power 40 minutes to replicate a single second of neural activity, and this is just to get to the next level of capability. […]
Fujitsu built that computer in 2013. https://www.cnet.com/culture/fujitsu-su ... -activity/

It was 10.51 petaflops.

This article says some experts reckoned by 2020 we'd have exascale computing.

2022 saw this supercomputer come online: https://en.wikipedia.org/wiki/Frontier_(supercomputer) which is 1.102 - 1.6 exaFLOPS.

That's 100x faster. Apparently they're getting roughly 10x faster every 4 years...

Cloud computing/distributed approaches have apparently already broken 2.5 exaflops.
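Quick back-of-envelope in Python to sanity-check those figures (only the numbers quoted above go in):

```python
import math

K_FLOPS = 10.51e15          # Fujitsu K computer: 10.51 petaflops
FRONTIER_FLOPS = 1.102e18   # Frontier (2022), low-end figure

speedup = FRONTIER_FLOPS / K_FLOPS
print(round(speedup))       # 105, i.e. roughly 100x

# What that implies per 4-year step over the 9 years between them:
per_step = 10 ** (math.log10(speedup) * 4 / 9)
print(round(per_step))      # 8, close to the quoted "10x every 4 years"
```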
Give a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life.
ASMO
Posts: 5250
Joined: Mon Jun 29, 2020 6:08 pm

Raggs wrote: Mon Apr 08, 2024 12:32 pm […] That's 100x faster. Apparently they're getting roughly 10x faster every 4 years...
Good old Moore's law, although I think that is now being challenged due to the physical limitations of silicon and semiconductors. Neven's law about quantum computing is too young to have any data supporting its suppositions.
Paddington Bear
Posts: 5234
Joined: Tue Jun 30, 2020 3:29 pm
Location: Hertfordshire

Another question: even if AI can replicate a worker, at what price can it do so? The billions being thrown around will need to be recouped.
Old men forget: yet all shall be forgot, But he'll remember with advantages, What feats he did that day
Insane_Homer
Posts: 5054
Joined: Tue Jun 30, 2020 3:14 pm
Location: Leafy Surrey

LLMs are not intelligent.
“Facts are meaningless. You could use facts to prove anything that's even remotely true.”
_Os_
Posts: 2027
Joined: Tue Jul 13, 2021 10:19 pm

Sandstorm wrote: Mon Apr 08, 2024 12:22 pm
Raggs wrote: Mon Apr 08, 2024 10:50 am You'll still need programmers, but nowhere near as many to produce the same volume of work.
Sux to be India. They're rolling out millions every year from universities.
Low-level dev jobs are surely going the way of the dodo. The thing with coding is, it takes a specific type of person to be really good at it. It's not the case that someone who isn't top level can just apply themselves more and be as good as someone who is.

Low-level copywriters and graphic designers are in the same pot. You can already pump some text into GPT and ask it to improve the grammar and tighten the prose, with less effort than using a spell checker. If you need some design work, there are AI apps for logos/social media templates/images/business cards that give impressive results. A lot of what businesses want from this type of work is well-crafted regurgitation, which AI excels at; they mostly want something unique but generic, not something new and creative (especially the smaller the business is).

It's easy to imagine spending on services needed to provide a professional appearance declining in the immediate future. In a larger business, the small team only doing web/graphic design/copywriting work will get smaller; in an SME, the one person who did that work is no longer needed. The person at the print shop doing the exact same work may still be there, but using the same AI tools everyone else is, and probably with some new roles.

Seems likely eventually a lot of the white collar bottom rungs on the ladder will be knocked out.
I like neeps
Posts: 3262
Joined: Tue Jun 30, 2020 9:37 am

_Os_ wrote: Mon Apr 08, 2024 1:11 pm […] Seems likely eventually a lot of the white collar bottom rungs on the ladder will be knocked out.
It was the same for the computer though no? Technology always creates different types of jobs.

I have an inherent dislike of AI. Big data pattern matching just takes the joy out of life, reducing us all to hundreds of millions of data points and nothing more. Booooring.
ASMO
Posts: 5250
Joined: Mon Jun 29, 2020 6:08 pm

Right now, it will work for tick-and-turn processes; I wouldn't trust it to do anything more than that. You can see how it might integrate well with a CRM solution in a call centre and deliver some real savings, but anything where a conscious decision needs to be made based on learning and sometimes intuition, it's miles away. We are using it to deliver some sentiment analysis on large data sets, and also to look for indicators of safeguarding concerns, but it still needs human oversight to make the decisions. Chatbots are something else we are looking at to help with our public-facing services, hopefully slightly better than the DPD one, which was hilarious.

For it to really start adding value in other areas you need to be 100% confident of the data sources: that they are accurate and don't contain biases that could impact the outcome. Security is another bloody nightmare too. Who remembers Google Desktop? Basically it was able to surface all sorts across a corporate network, including from restricted areas.
Raggs
Posts: 3450
Joined: Mon Jun 29, 2020 6:51 pm

ASMO wrote: Mon Apr 08, 2024 1:48 pm […] For it to really start adding value in other areas you need to be 100% confident of the data sources, that they are accurate and don't contain biases that could impact the outcome. […]
You don't have to be 100% confident of the data, though. You just need to spend fewer man-hours checking the data than you would have done collecting it.
Give a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life.
Ymx
Posts: 8557
Joined: Mon Jun 29, 2020 7:03 pm

I got into AI / deep learning some time back.

Using LLMs and some regression models too.

For the LLMs I did a course using Keras, word vectors, embeddings, and BERT (the Google model from a few years back).

Being an engineer I had a look inside the box. I actually could not believe how crudely primitive the whole thing was.

It was not smart; it was just told to guess something based on these inputs, using these nodes and weights.

Generative AI was just a stochastic parrot: guessing the word which should come next (based on the previous words and the seed text), and so on.
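To make the stochastic parrot point concrete, here's a toy next-word sampler (vocabulary and probabilities made up purely for illustration):

```python
import random

# Invented next-word probabilities, standing in for what a trained
# model would have learned from its corpus.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"down": 1.0},
}

def generate(seed, max_words, rng):
    """Repeatedly sample a next word given only the previous one."""
    words = [seed]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights, k=1)[0])
    return " ".join(words)

# Prints either "the cat sat down" or "the dog sat down"
print(generate("the", 5, random.Random()))
```

An LLM does this with a vastly bigger context and a learned distribution over tens of thousands of tokens, but the loop is the same: predict, sample, append, repeat.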

In terms of classification models, a good model typically had a success rate of 85% (pretty bloody low for real-world decisions at work).

It’s frighteningly dumb.

And yes, every CEO wants AI to run their business efficiently. Can’t AI do this for us?
Paddington Bear
Posts: 5234
Joined: Tue Jun 30, 2020 3:29 pm
Location: Hertfordshire

Raggs wrote: Mon Apr 08, 2024 2:22 pm
ASMO wrote: Mon Apr 08, 2024 1:48 pm Right now, it will work for tick and turn processes, i wouldn't trust it to do anything more than that. You can see how it might integrate well with a CRM solution in a call center and deliver some real savings, but anything where a conscious decision needs to be made based on learning and sometimes intuition, its miles away. We are using it to deliver some sentiment analysis on large data sets, also to look for indicators of safeguarding concerns, but it still needs human oversight to make the decisions. Chatbots are something else we are looking at to help with our public facing services, hopefully slightly better than the DPD one which was hilarious.



For it to really start adding value in other areas you need to 100% be confident of the data sources, that they are accurate and don't contain bias's that could impact the outcome. Security is another bloody nightmare too, who remembers google desktop, basically it was able to surface all sorts across a corporate network, in from restricted areas.
You don't have to be 100% confident of the data though. You just need to spend less man hours on checking the data, than you would have done collecting it.
Depends how much it costs, though; some of these platforms look like they're going to make Bloomberg terminals look cheap.
Old men forget: yet all shall be forgot, But he'll remember with advantages, What feats he did that day
bok_viking
Posts: 497
Joined: Mon Jul 06, 2020 9:46 am

Raggs wrote: Mon Apr 08, 2024 11:34 am Does that check take longer than doing all the menial work in the first place? And if you have a way to update the model, reporting what was right, what was wrong, and what was lacking, it will get better.
A friend of mine who is a lawyer is making use of AI a lot more these days, for creating drafts, researching laws, etc. If you know how to make proper use of AI in your field it has massive benefits. Of course she still has to check the results, but she says it has freed up her time a lot to focus on more important stuff; she is the head of the legal department for a big multinational corporation and tends to work a lot of overtime. It is definitely freeing up her time a lot more, but yes, it still needs to be checked by someone :lol: There are a lot of people who seem scared to make use of these AI tools though, or are very inefficient in using them.

But unfortunately AI is going to affect a lot of jobs and reduce the number of people required to do a specific job, and that list is getting longer and wider. AI combined with robotics will affect a lot of jobs in fields like manufacturing, where a lot of companies are already testing it and starting to implement it. It takes automation of businesses to a whole new level.
AI in a decade's time will be a completely different beast to what it is now, once it has had years of learning to handle tasks better.

The world's middle and lower classes will unfortunately suffer the most from these job losses, I think. Governments will eventually have to come up with ideas for how to handle this further down the line.
User avatar
Paddington Bear
Posts: 5234
Joined: Tue Jun 30, 2020 3:29 pm
Location: Hertfordshire

bok_viking wrote: Mon Apr 08, 2024 3:18 pm
Raggs wrote: Mon Apr 08, 2024 11:34 am
Paddington Bear wrote: Mon Apr 08, 2024 11:30 am

Yeah I can see this.

I work in law where a lot of what takes up our time is quite menial compared to what actually adds value. Could AI help with this? Definitely. Will it be more cost effective? Unlikely for a long time. Have I come across any automated system which I would use without checking myself before passing work across the desk of a partner (let alone a Court etc)? Not even close.

More generally, the amount of systems/processes that fall down unless and until you can find a human to talk to are astonishing. There’s a good chance AI changes roles, but I can’t see it meaningfully replacing people across the board
Does that check take longer than doing all the menial work in the first place? And if you have a way to update the model, reporting what was right, what was wrong, and what was lacking, it will get better.
A friend of mine who is a lawyer is making use of AI a lot more these days, for creating drafts, researching laws, etc. If you know how to make proper use of AI in your field it has massive benefits. Of course she still has to check the results, but she says it has freed up her time a lot to focus on more important stuff; she is the head of the legal department for a big multinational corporation and tends to work a lot of overtime. It is definitely freeing up her time a lot more, but yes, it still needs to be checked by someone :lol: There are a lot of people who seem scared to make use of these AI tools though, or are very inefficient in using them.
Looking forward to the first major AI hallucination/unchecked mistake in a big case/transaction that sends the professional indemnity insurance of every firm using this stuff through the roof
Old men forget: yet all shall be forgot, But he'll remember with advantages, What feats he did that day
_Os_
Posts: 2027
Joined: Tue Jul 13, 2021 10:19 pm

I like neeps wrote: Mon Apr 08, 2024 1:29 pm It was the same for the computer though no? Technology always creates different types of jobs.

I have an inherent dislike of AI. Big data pattern matching just takes the joy out of life, reducing us all to hundreds of millions of data points and nothing more. Booooring.
Comparison to the computer looks valid. They started out in businesses as expensive specialist machines which took up an entire room, not many people would've believed you if you told them then what would happen.

Sticking with the same example, media production pre-computers required type setters/photo setters, draftsmen doing graphic design, photographers, and printers. All specialist roles; corporates had in-house departments of these people. Computerisation eliminated setters, whose role was absorbed into those of graphic designers and printers; graphic designers became far less technically skilled and more generalist; photographers became less technically skilled to the point of usually not being specialists; and the role of printer requires less technical skill depending on the machine being used. New web roles were created.

Hard to judge (adjusting for population and economic growth etc), but maybe the net outcome was fewer people earning a living from media production and substantially more media production. Almost every adult in a developed economy produces media now.

Not totally convinced that the process happening again means everyone gets a new job. Computerisation turned a lot of goods-based jobs into services jobs (why buy a calendar or calculator when your phone has them etc); maybe you can't do that trick twice.
bok_viking
Posts: 497
Joined: Mon Jul 06, 2020 9:46 am

Paddington Bear wrote: Mon Apr 08, 2024 3:26 pm
bok_viking wrote: Mon Apr 08, 2024 3:18 pm
Raggs wrote: Mon Apr 08, 2024 11:34 am

Does that check take longer than doing all the menial work in the first place? And if you have a way to update the model, reporting what was right, what was wrong, and what was lacking, it will get better.
A friend of mine who is a lawyer is making use of AI a lot more these days, for creating drafts, researching laws, etc. If you know how to make proper use of AI in your field it has massive benefits. Of course she still has to check the results, but she says it has freed up her time a lot to focus on more important stuff; she is the head of the legal department for a big multinational corporation and tends to work a lot of overtime. It is definitely freeing up her time a lot more, but yes, it still needs to be checked by someone :lol: There are a lot of people who seem scared to make use of these AI tools though, or are very inefficient in using them.
Looking forward to the first major AI hallucination/unchecked mistake in a big case/transaction that sends the professional indemnity insurance of every firm using this stuff through the roof
haha, entirely possible; I would not be surprised if the T&Cs of many companies are already changing to fit AI in. But yeah, AI can be a great tool to use, as long as you still check the results, a bit like checking the work of an entry-level data entry clerk/administration person, etc. If you do not check for mistakes you might get burned, whether the work was done by human or AI.
User avatar
Paddington Bear
Posts: 5234
Joined: Tue Jun 30, 2020 3:29 pm
Location: Hertfordshire

bok_viking wrote: Mon Apr 08, 2024 3:47 pm
Paddington Bear wrote: Mon Apr 08, 2024 3:26 pm
bok_viking wrote: Mon Apr 08, 2024 3:18 pm

A friend of mine who is a lawyer is making use of AI a lot more these days, for creating drafts, researching laws, etc. If you know how to make proper use of AI in your field it has massive benefits. Of course she still has to check the results, but she says it has freed up her time a lot to focus on more important stuff; she is the head of the legal department for a big multinational corporation and tends to work a lot of overtime. It is definitely freeing up her time a lot more, but yes, it still needs to be checked by someone :lol: There are a lot of people who seem scared to make use of these AI tools though, or are very inefficient in using them.
Looking forward to the first major AI hallucination/unchecked mistake in a big case/transaction that sends the professional indemnity insurance of every firm using this stuff through the roof
haha, entirely possible; I would not be surprised if the T&Cs of many companies are already changing to fit AI in. But yeah, AI can be a great tool to use, as long as you still check the results, a bit like checking the work of an entry-level data entry clerk/administration person, etc. If you do not check for mistakes you might get burned, whether the work was done by human or AI.
Yep agree with all of that
Old men forget: yet all shall be forgot, But he'll remember with advantages, What feats he did that day
I like neeps
Posts: 3262
Joined: Tue Jun 30, 2020 9:37 am

_Os_ wrote: Mon Apr 08, 2024 3:45 pm
I like neeps wrote: Mon Apr 08, 2024 1:29 pm It was the same for the computer though no? Technology always creates different types of jobs.

I have an inherent dislike of AI. Big data pattern matching just takes the joy out of life, reducing us all to hundreds of millions of data points and nothing more. Booooring.
Comparison to the computer looks valid. They started out in businesses as expensive specialist machines which took up an entire room, not many people would've believed you if you told them then what would happen.

Sticking with the same example, media production pre-computers required type setters/photo setters, draftsmen doing graphic design, photographers, and printers. All specialist roles; corporates had in-house departments of these people. Computerisation eliminated setters, whose role was absorbed into those of graphic designers and printers; graphic designers became far less technically skilled and more generalist; photographers became less technically skilled to the point of usually not being specialists; and the role of printer requires less technical skill depending on the machine being used. New web roles were created.

Hard to judge (adjusting for population and economic growth etc), but maybe the net outcome was fewer people earning a living from media production and substantially more media production. Almost every adult in a developed economy produces media now.

Not totally convinced that the process happening again means everyone gets a new job. Computerisation turned a lot of goods-based jobs into services jobs (why buy a calendar or calculator when your phone has them etc); maybe you can't do that trick twice.
Yes, my point about the computer was that it created whole industries previously not possible, and I'm sure AI ultimately will too.

At the end of the day AI will be completely valueless if it means huge unemployment. Nobody for Alphabet to sell ads to, nobody for Prime to sell products to or companies to host in AWS; nobody is buying Microsoft products when there are no staff, nobody is buying a Tesla, and nobody has any money to invest/pensions/401ks to keep the stock market frothy etc etc. It's really not in the capitalist elites' interests to actually have AI replace people.
User avatar
JM2K6
Posts: 9021
Joined: Wed Jul 01, 2020 10:43 am

Raggs wrote: Mon Apr 08, 2024 10:45 am
C T wrote: Mon Apr 08, 2024 9:27 am More and more I am finding myself having quite an extreme view on AI, and it is not in favour.

Essentially, what I'm seeing is CEO's frothing at the mouth with an unsaid ambition to half their workforce. Or, if they could, get rid all together.

Then of course, what I'm starting to feel is more and more "pressure" (perhaps too strong a term) to get excited about it myself. Company strategy, bosses etc.

This might all stem from a personal belief that companies should be seeking to employ as many people as they can while still maintaining an acceptable level of profit. Not everyone will agree of course.

Anyway, here's my biggest problem with it. It is crap. CEO's are being sold a giant pig in a poke. You've got the DPD chatbot that was tricked into slagging off DPD. I've seen an example of a team at work getting a massive AI investment and then damn near breaking the company (not through the large investment, but through the subsequent output from the team). Interactions I've had with chat bots.

Saying all this I'm sure there's plenty of examples of where AI is helping people do things better and even saving lives.

Curious to know what others think?
It's only going to get better, and probably quickly. I'd be very careful of looking into any career that can be supplemented with AI, unless you have a serious passion for it.

If humanity does it right, it's a huge positive. As it won't, it's going to cause issues.
It's a massive assumption that it's only going to get better. There are fundamental flaws with large language models in particular. There are strong arguments being made that this is largely as good as it's going to get with the current approach. And that's before you scratch the surface of the vast amounts of illegality involved in the training sets used for these AIs, or the sheer cost and computational power required for something with genuinely limited usefulness, or start to consider that they are actually getting worse because they are beginning to absorb output produced by other AIs.

Recommended reading:

https://www.wheresyoured.at/peakai/

https://www.wheresyoured.at/bubble-trouble/
_Os_
Posts: 2027
Joined: Tue Jul 13, 2021 10:19 pm

I like neeps wrote: Mon Apr 08, 2024 5:15 pm Yes, my point about the computer was that it created whole industries previously not possible, and I'm sure AI ultimately will too.

At the end of the day AI will be completely valueless if it means huge unemployment. Nobody for Alphabet to sell ads to, nobody for Prime to sell products to or companies to host in AWS; nobody is buying Microsoft products when there are no staff, nobody is buying a Tesla, and nobody has any money to invest/pensions/401ks to keep the stock market frothy etc etc. It's really not in the capitalist elites' interests to actually have AI replace people.
There's three categories of job which will be impacted last or not at all.

Those that require a licence. These are areas where society prefers qualified humans making the mistakes, or they're state controlled, or they're powerful and have put up barriers to entry. Drivers/pilots/medical/law/accounting/teachers/police/military fall into this category. Truck drivers are the canary: you need a licence, so not just anyone can rock up and start driving huge vehicles, and there's been a lot of talk and investment into replacing them. Truck drivers still exist.

Those that require in person human to human contact. Some medical (nurses and carers)/entertainment/teaching (AIs tailored to the needs of each student could do a lot of leg work, but children still need the teacher)/police (arresting bad guys)/soldiers (sticking bayonets into bad guys).

Trades. Some overlap with the first group, you wouldn't want anyone rocking up and doing electrics. In their own category because Terminators don't look close to replacing physical jobs.

Everything else looks like it gets ravaged by AI early on, which is most of the productive economy. Final destination unknown.
User avatar
Sandstorm
Posts: 9500
Joined: Mon Jun 29, 2020 7:05 pm
Location: England

Surely soldiers get replaced by AI before anyone else? Drones are already replacing pilots. You won’t need meat bags in fatigues in the next decade.
_Os_
Posts: 2027
Joined: Tue Jul 13, 2021 10:19 pm

JM2K6 wrote: Mon Apr 08, 2024 5:34 pm
Raggs wrote: Mon Apr 08, 2024 10:45 am
C T wrote: Mon Apr 08, 2024 9:27 am More and more I am finding myself having quite an extreme view on AI, and it is not in favour.

Essentially, what I'm seeing is CEO's frothing at the mouth with an unsaid ambition to half their workforce. Or, if they could, get rid all together.

Then of course, what I'm starting to feel is more and more "pressure" (perhaps too strong a term) to get excited about it myself. Company strategy, bosses etc.

This might all stem from a personal belief that companies should be seeking to employ as many people as they can while still maintaining an acceptable level of profit. Not everyone will agree of course.

Anyway, here's my biggest problem with it. It is crap. CEO's are being sold a giant pig in a poke. You've got the DPD chatbot that was tricked into slagging off DPD. I've seen an example of a team at work getting a massive AI investment and then damn near breaking the company (not through the large investment, but through the subsequent output from the team). Interactions I've had with chat bots.

Saying all this I'm sure there's plenty of examples of where AI is helping people do things better and even saving lives.

Curious to know what others think?
It's only going to get better, and probably quickly. I'd be very careful of looking into any career that can be supplemented with AI, unless you have a serious passion for it.

If humanity does it right, it's a huge positive. As it won't, it's going to cause issues.
It's a massive assumption that it's only going to get better. There are fundamental flaws with large language models in particular. There are strong arguments being made that this is largely as good as it's going to get with the current approach. And that's before you scratch the surface of the vast amounts of illegality involved in the training sets used for these AIs, or the sheer cost and computational power required for something with genuinely limited usefulness, or start to consider that they are actually getting worse because they are beginning to absorb output produced by other AIs.

Recommended reading:

https://www.wheresyoured.at/peakai/

https://www.wheresyoured.at/bubble-trouble/
Thanks JM, they were good; the money put into this is crazy. But this paragraph is the interesting one for me:

"If you focus on the present — what OpenAI's technology can do today, and will likely do for some time — you see in terrifying clarity that generative AI isn't a society-altering technology, but another form of efficiency-driving cloud computing software that benefits a relatively small niche of people.".

In your view if what's there already is fully capitalised on, then what? I'm wondering how far you think the coding ability has reached.

I've used stable diffusion a lot, it's clunky but powerful, especially when you feed it an image and a prompt (rough drawing of what you want and a text prompt). In another article on media job losses the same author says this:
Microsoft's one minute Super Bowl ad for its "Copilot" AI assistant showed people asking it things like "write code for my 3D open world game" and "make a logo for my classic truck repair garage Mike's," the latter of which produced four usable images in the commercial. When I tried to replicate this in the real world, Copilot made four images with completely incomprehensible text, only one of which said "Mike's." And while one might come away from the commercial thinking you can do something, it isn't particularly clear what that something is.
https://www.wheresyoured.at/the-anti-economy/
This misses the point a bit. Text is easy to clean up/replace/remove; someone with intermediate skills can do it in less time than I've taken to make this post, using an open source graphics editor (GIMP). Generating high quality images without much human input is new and was previously the hard part. I'm guessing Copilot isn't focused solely on images like Stable Diffusion. There are also a lot of cheap options that deliver a complete logo/brand without needing any knowledge of graphics editing (vectoring/transparencies/text, all done). You can play with these ones for free:
https://looka.com/
https://www.logoai.com/
User avatar
JM2K6
Posts: 9021
Joined: Wed Jul 01, 2020 10:43 am

_Os_ wrote: Mon Apr 08, 2024 8:25 pm
JM2K6 wrote: Mon Apr 08, 2024 5:34 pm
Raggs wrote: Mon Apr 08, 2024 10:45 am

It's only going to get better, and probably quickly. I'd be very careful of looking into any career that can be supplemented with AI, unless you have a serious passion for it.

If humanity does it right, it's a huge positive. As it won't, it's going to cause issues.
It's a massive assumption that it's only going to get better. There are fundamental flaws with large language models in particular. There are strong arguments being made that this is largely as good as it's going to get with the current approach. And that's before you scratch the surface of the vast amounts of illegality involved in the training sets used for these AIs, or the sheer cost and computational power required for something with genuinely limited usefulness, or start to consider that they are actually getting worse because they are beginning to absorb output produced by other AIs.

Recommended reading:

https://www.wheresyoured.at/peakai/

https://www.wheresyoured.at/bubble-trouble/
Thanks JM, they were good; the money put into this is crazy. But this paragraph is the interesting one for me:

"If you focus on the present — what OpenAI's technology can do today, and will likely do for some time — you see in terrifying clarity that generative AI isn't a society-altering technology, but another form of efficiency-driving cloud computing software that benefits a relatively small niche of people.".

In your view if what's there already is fully capitalised on, then what? I'm wondering how far you think the coding ability has reached.
I don't think there is another leap forward for the tech. I think it's very telling that they've moved to 30 second videos that look an awful lot like a mashup of every single "First person walk through Tokyo in 4K in the rain" video that exists on YouTube, because it's easier to point to the new shiny thing and make wild claims than it is to turn the existing text and image based technology into something genuinely useful.

Because the tech has no actual understanding of what it is doing, but it is capable of fooling people at a superficial level, it is being treated as being far more impressive than it is. In essence, generative AI is basically one absolutely gigantic fuckoff database with the ability to lookup a whole bunch of keywords from millions of examples and then take a guess at what the pattern should look like based off the dataset. There is no real AI here; it's just a big algorithm with a big dataset drawing from other people's work. This is why you can easily replicate, I dunno, a Mario Bros or Marvel or anime image, but it can't actually create less popular styles in any meaningfully useful way. Yes, it can generate something that is convincing enough at first glance if you don't look too hard, but for professional usage it is a million miles away from a good coherent standard and requires just as much work to clean up as it would to get someone to make a better one in the first place (this is true for video and image, slightly less true for text). Much like crypto and NFTs, it's a solution in search of a problem.
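To make the "big lookup table taking a guess at the pattern" point concrete, here's a toy sketch (nothing to do with any real model, just a bigram counter over a made-up corpus): it continues text purely by frequency of what followed what, with zero understanding of any of it.

```python
from collections import Counter, defaultdict
import random

# Toy corpus standing in for "other people's work".
corpus = "the cat sat on the mat the cat ate the fish".split()

# The entire "model" is a lookup table of which word followed which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=5, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        # "Guess what the pattern should look like": sample by frequency.
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```

Scale the table up by a few billion documents and you get something far more convincing, but it's the same trick: the output can only ever be a remix of what went in, which is why popular styles come out easily and anything under-represented doesn't.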

It is absolutely incredible to me that OpenAI et al are hawking this shit around Hollywood studios. Obviously studio execs are exactly the sort of credulous morons who think they can make money by replacing creatives with this slop, but there is zero chance it will replace the real thing in any meaningful way. The fact that this technology is commercially aimed at companies who want to save on the cost of creative people (and who don't understand what it takes to create that stuff in the first place) is another massive red flag.
I've used stable diffusion a lot, it's clunky but powerful, especially when you feed it an image and a prompt (rough drawing of what you want and a text prompt). In another article on media job losses the same author says this:
Microsoft's one minute Super Bowl ad for its "Copilot" AI assistant showed people asking it things like "write code for my 3D open world game" and "make a logo for my classic truck repair garage Mike's," the latter of which produced four usable images in the commercial. When I tried to replicate this in the real world, Copilot made four images with completely incomprehensible text, only one of which said "Mike's." And while one might come away from the commercial thinking you can do something, it isn't particularly clear what that something is.
https://www.wheresyoured.at/the-anti-economy/
This misses the point a bit. Text is easy to clean up/replace/remove; someone with intermediate skills can do it in less time than I've taken to make this post, using an open source graphics editor (GIMP). Generating high quality images without much human input is new and was previously the hard part. I'm guessing Copilot isn't focused solely on images like Stable Diffusion. There are also a lot of cheap options that deliver a complete logo/brand without needing any knowledge of graphics editing (vectoring/transparencies/text, all done). You can play with these ones for free:
https://looka.com/
https://www.logoai.com/
Copilot is essentially trying to be StackOverflow for code. I now work for Very Big Corporation(tm) and it's installed by default. The junior engineers are relatively happy with it. The senior ones are aghast at what it's doing, including "baking in" mistakes by producing bad/incorrect/insecure/dangerous code while giving an air of authority.
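A concrete (and entirely hypothetical, not an actual Copilot transcript) example of the kind of mistake that gets "baked in": string-built SQL works perfectly on the happy-path demo input, which is exactly why a junior accepts it with no idea it's an injection hole.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # The kind of code an assistant will happily suggest with full
    # confidence: fine on the demo input, but any quote character in
    # `name` becomes part of the SQL itself.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterised query: the driver handles escaping, same result.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_safe("alice"))  # [('admin',)]
```

Feed `find_user_unsafe` the classic `x' OR '1'='1` and it cheerfully returns every row in the table; both functions look identical in a quick review, which is the "air of authority" problem in miniature.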

StableDiffusion is fine as a toy but every company that has tried to use it or something similar for actual product has been laughed out of town, because the output is always dreadful. It is the opposite of high quality. What it has going for it is that it's relatively quick and "cheap" to the end user. It's not actually cheap, though; that's why there is such a rush to get VC money involved (and their track record is pretty funny) and why they're hunting for the big contracts before the bills come due, while they cross their fingers and hope the lawsuits don't happen.

Agreed that text is easier to clean up, but the technology is prone to adding in absolute bullshit - convincing bullshit, but bullshit nonetheless - and those hallucinations are baked into the design. AI text is also written in an extremely obvious way, and it's funny that this is only getting observably worse as these models continue to be trained on the public internet, which is filling up with AI written slop. It is useful to a degree for ESL people and I won't deny it can act as a pretty good prompt on its own, but it is not worth the frankly nutso power requirements given what we're doing to the planet already. And I won't shy away from the legal aspect; it is genuinely disgraceful that these companies are trying to defend their behaviour by claiming that they wouldn't be able to afford to produce this tech if they had to licence the training data. Yeah, no shit guys.


What annoys me about a lot of this is that it's poisoning the well for future actual-AI products. AI (or even just fancy algorithms) does actually have a lot of useful applications, because there are some things that it can already do pretty well, and outside of generative AI there are all kinds of progressive applications. And uh military ones as well I guess (hi Israel) but that was inevitable. All the focus being on the actually pretty useless genAI, which IMO will inevitably crash and burn, means less funding for more useful AI and far more gunshy investors in future. The fact that the same people who were previously pushing crypto and then NFTs are the same boosting this tech should make anyone take a long hard look at what the technology actually offers - and at what cost.
Jethro
Posts: 270
Joined: Wed Aug 25, 2021 3:09 am

Raggs wrote: Mon Apr 08, 2024 10:50 am
Paddington Bear wrote: Mon Apr 08, 2024 10:48 am I remain quite sceptical. Most automated processes are totally unsatisfactory and require humans to circumvent. I don’t see that changing massively.
Like I said, I don't think any career goes extinct. But 1 human can oversee 5 AI conversations, only having to intervene here and there, rather than needing humans for each one etc.

Programmers are saying that AI can do a huge amount of coding for them, but can't do everything, making them massively more efficient as they don't have to worry about as much "busy work". It can also scan for bugs/errors faster etc. You'll still need programmers, but nowhere near as many to produce the same volume of work.
Raggs, back in the 1960s the claim was that the COBOL programming language would allow managers to write systems and you wouldn't need those pesky nerds down in the basement. That turned out not to be entirely correct.

What they are calling AI nowadays is media-driven; the term for what we have is "Expert Systems" (yeah, not as sexy for sure), or as I like to call it, smoke and mirrors: code all possible answers to a question and follow a decision path, and if the answer isn't what you want, ask the question in a different way till you get an answer your system can interpret, using something called "fuzzy logic".
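That decision-path idea fits in a few lines. A toy sketch (entirely made-up rules, just to show the shape): match the question against canned keyword patterns, and if nothing fits, do what real bots do and ask you to rephrase.

```python
# Hypothetical "expert system" in miniature: every answer is pre-coded,
# and the only intelligence is matching the question to a rule.
RULES = [
    ({"reset", "password"}, "Click 'Forgot password' on the login page."),
    ({"refund", "order"}, "Refunds are processed within 5 working days."),
    ({"opening", "hours"}, "We're open 9-5, Monday to Friday."),
]

def answer(question):
    words = set(question.lower().replace("?", "").split())
    # Crude "fuzzy" matching: pick the rule sharing the most keywords.
    best_score, best_reply = 0, None
    for keywords, reply in RULES:
        score = len(keywords & words)
        if score > best_score:
            best_score, best_reply = score, reply
    # No rule matched: prompt the user to ask a different way.
    return best_reply or "Sorry, could you ask that a different way?"

print(answer("How do I reset my password?"))
```

Every chatbot dead-end of "sorry, I didn't understand, could you rephrase?" is that final fall-through line, whatever the marketing says is underneath.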

My niece is currently studying AI, and is that ever complex; she reckons a few research orgs have true AI, but at a very fundamental level.

The computer apocalypse is quite a way off folks.
User avatar
Uncle fester
Posts: 3476
Joined: Mon Jun 29, 2020 9:42 pm

Every new tech innovation seems to extend, not shorten my working hours. I remain skeptical that AI is going to put us all out of work.

And if it does, we're going to have to come up with a very different economic model. The consumer economy needs people with excess cash to keep it going so that means people with reasonably well paid jobs.

Iain M Banks post-scarcity Culture series might get a run out yet.
User avatar
JM2K6
Posts: 9021
Joined: Wed Jul 01, 2020 10:43 am

Jethro wrote: Tue Apr 09, 2024 1:35 am
Raggs wrote: Mon Apr 08, 2024 10:50 am
Paddington Bear wrote: Mon Apr 08, 2024 10:48 am I remain quite sceptical. Most automated processes are totally unsatisfactory and require humans to circumvent. I don’t see that changing massively.
Like I said, I don't think any career goes extinct. But 1 human can oversee 5 AI conversations, only having to intervene here and there, rather than needing humans for each one etc.

Programmers are saying that AI can do a huge amount of coding for them, but can't do everything, making them massively more efficient as they don't have to worry about as much "busy work". It can also scan for bugs/errors faster etc. You'll still need programmers, but nowhere near as many to produce the same volume of work.
Raggs, back in the 1960s the claim was that the COBOL programming language would allow managers to write systems and you wouldn't need those pesky nerds down in the basement. That turned out not to be entirely correct.

What they are calling AI nowadays is media-driven; the term for what we have is "Expert Systems" (yeah, not as sexy for sure), or as I like to call it, smoke and mirrors: code all possible answers to a question and follow a decision path, and if the answer isn't what you want, ask the question in a different way till you get an answer your system can interpret, using something called "fuzzy logic".

My niece is currently studying AI, and is that ever complex; she reckons a few research orgs have true AI, but at a very fundamental level.

The computer apocalypse is quite a way off folks.
I do not know many good programmers saying AI can do a lot of work for them, and I know a lot of programmers
User avatar
ASMO
Posts: 5250
Joined: Mon Jun 29, 2020 6:08 pm

Uncle fester wrote: Tue Apr 09, 2024 6:45 am Every new tech innovation seems to extend, not shorten my working hours. I remain skeptical that AI is going to put us all out of work.

And if it does, we're going to have to come up with a very different economic model. The consumer economy needs people with excess cash to keep it going so that means people with reasonably well paid jobs.

Iain M Banks post-scarcity Culture series might get a run out yet.
In its current iteration I completely agree, but once we crack the strong AI model all bets are off. I don't think that's going to happen in my lifetime, though.
_Os_
Posts: 2027
Joined: Tue Jul 13, 2021 10:19 pm

JM2K6 wrote: Mon Apr 08, 2024 9:59 pm I don't think there is another leap forward for the tech. I think it's very telling that they've moved to 30 second videos that look an awful lot like a mashup of every single "First person walk through Tokyo in 4K in the rain" video that exists on YouTube, because it's easier to point to the new shiny thing and make wild claims than it is to turn the existing text and image based technology into something genuinely useful.

Because the tech has no actual understanding of what it is doing, but is capable of fooling people at a superficial level, it is being treated as far more impressive than it is. In essence, generative AI is basically one absolutely gigantic fuckoff database with the ability to look up a whole bunch of keywords from millions of examples and then take a guess at what the pattern should look like based on the dataset. There is no real AI here; it's just a big algorithm with a big dataset drawing from other people's work. This is why you can easily replicate, I dunno, a Mario Bros or Marvel or anime image, but it can't actually create less popular styles in any meaningfully useful way. Yes, it can generate something that is convincing enough at first glance if you don't look too hard, but for professional usage it is a million miles away from a good coherent standard and requires just as much work to clean up as it would take to get someone to make a better one in the first place (this is true for video and image, slightly less true for text). Much like crypto and NFTs, it's a solution in search of a problem.

It is absolutely incredible to me that OpenAI et al are hawking this shit around Hollywood studios. Obviously studio execs are exactly the sort of credulous morons who think they can make money by replacing creatives with this slop, but there is zero chance it will replace the real thing in any meaningful way. The fact that this technology is commercially aimed at companies who want to save on the cost of creative people (and who don't understand what it takes to create that stuff in the first place) is another massive red flag.
I don't understand what the use case is for the models drawing from the gigantic general databases. I knew the money being thrown at it was huge (though I didn't realise how much until I read the articles you posted), and that money doesn't justify the actual potential use cases at the moment. Are they just trying to make the biggest general databases possible and hoping AGI magically happens?

They want to eliminate as many service sector white collar jobs as possible, fuck knows why, but that's what they want. But a general tool isn't going to be capable of doing that.
JM2K6 wrote: Mon Apr 08, 2024 9:59 pm Copilot is essentially trying to be StackOverflow for code. I now work for Very Big Corporation(tm) and it's installed by default. The junior engineers are relatively happy with it. The senior ones are aghast at what it's doing, including "baking in" mistakes by producing bad/incorrect/insecure/dangerous code while giving an air of authority.

StableDiffusion is fine as a toy but every company that has tried to use it or something similar for actual product has been laughed out of town, because the output is always dreadful. It is the opposite of high quality. What it has going for it is that it's relatively quick and "cheap" to the end user. It's not actually cheap, though; that's why there is such a rush to get VC money involved (and their track record is pretty funny) and why they're hunting for the big contracts before the bills come due, while they cross their fingers and hope the lawsuits don't happen.

Agreed that text is easier to clean up, but the technology is prone to adding in absolute bullshit - convincing bullshit, but bullshit nonetheless - and those hallucinations are baked into the design. AI text is also written in an extremely obvious way, and it's funny that this is only getting observably worse as these models continue to be trained on the public internet, which is filling up with AI written slop. It is useful to a degree for ESL people and I won't deny it can act as a pretty good prompt on its own, but it is not worth the frankly nutso power requirements given what we're doing to the planet already. And I won't shy away from the legal aspect; it is genuinely disgraceful that these companies are trying to defend their behaviour by claiming that they wouldn't be able to afford to produce this tech if they had to licence the training data. Yeah, no shit guys.

What annoys me about a lot of this is that it's poisoning the well for future actual-AI products. AI (or even just fancy algorithms) does actually have a lot of useful applications, because there are some things that it can already do pretty well, and outside of generative AI there are all kinds of progressive applications. And uh military ones as well I guess (hi Israel) but that was inevitable. All the focus being on the actually pretty useless genAI, which IMO will inevitably crash and burn, means less funding for more useful AI and far more gunshy investors in future. The fact that the same people who were previously pushing crypto and then NFTs are the same boosting this tech should make anyone take a long hard look at what the technology actually offers - and at what cost.
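On the "baking in" point, here's an invented illustration of the classic case: SQL built by string formatting looks plausible and gets suggested all the time, but it's injectable; the parameterised version is the correct one.

```python
# Invented example of the kind of insecure pattern an assistant can "bake in":
# SQL built by string formatting (injectable) vs a parameterised query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # hostile input

# Looks plausible, reads fine, and is SQL injection waiting to happen:
unsafe = conn.execute(
    f"SELECT role FROM users WHERE name = '{name}'"
).fetchall()

# The correct version lets the driver handle the value safely:
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)
).fetchall()

print(unsafe)  # [('admin',)] -- the injected clause matched every row
print(safe)    # [] -- no user is literally named "alice' OR '1'='1"
```

Both versions run without errors and look equally authoritative in an editor, which is exactly why a junior engineer accepting suggestions wholesale is the worry.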
This is where it gets interesting. As you say, those at the top are aghast while those at the bottom are happy with it, because one group can discern quality in their area of expertise more clearly than the other. But value is sometimes subjective and not mission-critical; being perfect is sometimes just over-engineering. I've seen this with friends working in design: they absolutely would not agree with me that an AI logo generator is basically at the level required for professional commercial use ("that's fucking clip art!"), because they can discern the difference in quality. The thing is, most people cannot, and they're the ones looking at the branding. So does an SME spend five figures on something professional, or under £100 on something generated? Ironically it may be the low cost that puts them off rather than the quality.

Thing with Stable Diffusion is that it's open source and you can install a local instance; a mid-range desktop and GPU is good enough. You can then create your own custom training data, e.g. your own work that you've tagged up, and use negative prompts to remove the typical AI slop. You can then create hundreds of variations in different styles and select the ones you want (similar to picking the photos you want from a photo shoot: you only keep a few good ones, most go in the bin). There is a masking tool for redoing artefacts. Where that could fit into a workflow is the idea-formation phase before the final product, where quantity as well as quality matters, something that would otherwise take one person weeks of creating variations (or more than one person correspondingly less time). Most of what is created in design studios, creative industries, etc. never leaves the building and is only seen by the people working there.
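As a rough sketch of that generate-hundreds-then-select workflow, here's the bookkeeping side in Python; the actual model call is left as a comment because it assumes the diffusers library, downloaded weights and a GPU, and all the prompt text is invented:

```python
# Sketch of the batch-variation workflow: fix a prompt and a negative prompt,
# then sweep styles and seeds so every candidate image is reproducible.
from itertools import product

prompt = "logo for a small coffee roastery, flat design"  # invented example
negative_prompt = "blurry, watermark, extra limbs, text artefacts"
styles = ["minimalist", "vintage", "hand-drawn", "geometric"]
seeds = range(25)  # 25 seeds x 4 styles = 100 candidates to skim and mostly bin

jobs = [
    {"prompt": f"{prompt}, {style} style",
     "negative_prompt": negative_prompt,
     "seed": seed}
    for style, seed in product(styles, seeds)
]

# The generation loop itself needs `diffusers`, model weights and a GPU:
# for job in jobs:
#     generator = torch.Generator("cuda").manual_seed(job["seed"])
#     image = pipe(job["prompt"], negative_prompt=job["negative_prompt"],
#                  generator=generator).images[0]
#     image.save(f"out/{job['seed']}.png")

print(len(jobs))
```

Because each candidate records its seed and full prompt, any image you like can be regenerated exactly or handed to the masking tool for touch-up.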

I have no experience with text AI other than playing with GPT; I couldn't find anything open source I could make my own training data for (proprietary options are too expensive). From playing with GPT, it cannot combine two abstract concepts to come up with something new (requests for text "in the style of" aren't abstractions, because the styles are concrete examples in the model); it's dumb, as you say. The Israeli military AI situation is a good example of why it cannot replace a real analyst: "Western aid convoy + maybe one low-level Hamas guy in there = bomb, because within tolerances for targeting Hamas" isn't something a human would come up with (dystopian that the defence is "I followed the orders the AI gave me", but then they're low-level and maybe couldn't discern what was quality targeting information and what wasn't). I could see it being useful in any situation where the input text is closely followed and improved; a low noise tolerance probably means hallucinations are mostly eliminated. I was wondering if something like that existed for code (it seems Copilot is that?), basically an advanced autofill.
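For what it's worth, that "advanced autofill" idea is next-token prediction at heart; a toy bigram model over a tiny invented corpus shows the loop (real assistants like Copilot use huge neural models, but the predict-the-next-token shape is the same):

```python
# Toy "advanced autofill": a bigram model predicting each next word from the
# previous one, built from a tiny invented corpus. Real code assistants use
# large neural networks, but the completion loop is the same idea.
from collections import Counter, defaultdict

corpus = (
    "for each item in the list check the item and log the item "
    "for each user in the list check the user"
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word: str, length: int = 4) -> list[str]:
    """Greedily extend `word` with the most common continuation each step."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return out

print(" ".join(complete("for")))
```

The model can only ever remix continuations it has literally seen, which lines up with the point about GPT being unable to combine genuinely new abstractions.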
sefton
Posts: 705
Joined: Mon Jun 29, 2020 8:00 pm

ASMO wrote: Tue Apr 09, 2024 7:45 am
Uncle fester wrote: Tue Apr 09, 2024 6:45 am Every new tech innovation seems to extend, not shorten my working hours. I remain skeptical that AI is going to put us all out of work.

And if it does, we're going to have to come up with a very different economic model. The consumer economy needs people with excess cash to keep it going so that means people with reasonably well paid jobs.

Iain M Banks post-scarcity Culture series might get a run out yet.
In its current iteration I completely agree, but once we crack the strong AI model, all bets are off. I don't think that's going to happen in my lifetime, though.
So the next 5 years then.