Google launches Gemini—a powerful AI model it says can surpass GPT-4

deet

Ars Praefectus
3,272
Subscriptor++
Looking at the logo image, it seems that Google has really taken a page out of Apple's branding playbook. I could easily see this as a slide at an Apple presentation. Except for the color gradient in the letters, perhaps.
Interesting choice to have the threads come out on the right the same as they go in on the left. Great signifier for activity without progress, though.

Now, is Gemini better because it's a better model, or because Google can train it on more data than OpenAI can scrape?
 
Upvote
62 (65 / -3)
Looking at the logo image, it seems that Google has really taken a page out of Apple's branding playbook. I could easily see this as a slide at an Apple presentation. Except for the color gradient in the letters, perhaps.

I see a Chinese Finger Trap. Bad times at the carnival. Lots of people you thought you knew, getting stuck.
 
Upvote
54 (54 / 0)

stormcrash

Ars Tribunus Angusticlavius
6,157
Looking at the logo image, it seems that Google has really taken a page out of Apple's branding playbook. I could easily see this as a slide at an Apple presentation. Except for the color gradient in the letters, perhaps.
I think the Apple vibe is coming from the dark background. Normally Google favors a white background in presentations and logos. If I picture it in the inverse, it doesn't seem "clean" enough for an Apple logo and looks much more Google-esque, in my mind at least.
 
Upvote
15 (15 / 0)

dragonfliet

Smack-Fu Master, in training
60
At this point, I just wish that they would log every piece of generated text and sell it to Universities/Turnitin so we could help cut down the absolute DELUGE of generated mediocrity. So many faculty have no idea how to handle this, and the students are not only generating bad work, but are hamstringing themselves pretty hardcore.
 
Upvote
5 (15 / -10)

HandymanHandy

Wise, Aged Ars Veteran
164
So Ultra is GPT-4 level but is not available until next year, and is English only at the moment.
Pro is GPT-3.5 level and is available today in Bard.

That tells me Google is roughly one year behind OpenAI at production scale: ChatGPT launched on GPT-3.5 a year ago this time, and GPT-4 was powering Bing by February.
 
Upvote
57 (63 / -6)

ChaosFenix

Wise, Aged Ars Veteran
166
I feel that for us to actually rely on this stuff for everyday tasks, these accuracy percentages need to reach the five-nines territory we require of a lot of other IT infrastructure. Until they do, it will need a human operator in the loop to make the final decision or clean up the results. That isn't to say it can't be useful, just that it can't be relied on in the same way we rely on a computer or the internet.
 
Upvote
15 (17 / -2)
At this point, I just wish that they would log every piece of generated text and sell it to Universities/Turnitin so we could help cut down the absolute DELUGE of generated mediocrity. So many faculty have no idea how to handle this, and the students are not only generating bad work, but are hamstringing themselves pretty hardcore.

No solution, btw. This is what obsolescence of a model of "knowledge" looks like. An epistemic paradigm shift. Buckle up!

This is happening all across grade school as well. And the system is not set up for teachers in, say, 9th grade, to screen, and so it's ... just happening. Homework assignments are all meeting spec. No one knows exactly what to do. This from my friend in the NYC school district. Deer in headlights.

In college it is of course the same but doubled by the cleverness of the students, and how tricks spread through student bodies. Large scale denial.

So, it is bad right now. And no solution anyone in education gets paid enough to toil over.

Next year will be this year +1, next year +2, and so on. Students coming into first year college classes in 2025 will have a +2 AI off-load factor. 2026, +3, etc.

We have no broad-spread idea what this is going to do to cognitive development, and what pedagogy needs to now look like.

Buckle up indeed. Everyone reading this article is old tech. In 5 years, what will a college student look like? A high schooler?
 
Upvote
75 (75 / 0)

Bongle

Ars Praefectus
4,051
Subscriptor++
No solution, btw. This is what obsolescence of a model of "knowledge" looks like. An epistemic paradigm shift. Buckle up!

This is happening all across grade school as well. And the system is not set up for teachers in, say, 9th grade, to screen, and so it's ... just happening. Homework assignments are all meeting spec. No one knows exactly what to do. This from my friend in the NYC school district. Deer in headlight.

In college it is of course the same but doubled by the cleverness of the students, and how tricks spread through student bodies. Large scale denial.

So, it is bad right now. And no solution anyone in education gets paid enough to toil over.

Next year will be this year +1, next year +2, and so on. Students coming into first year college classes in 2025 will have a +2 AI off-load factor. 2026, +3, etc.

We have no broad-spread idea what this is going to do to cognitive development, and what pedagogy needs to now look like.

Buckle up indeed. Everyone reading this article is old tech. In 5 years, what will a college student look like? A high schooler?
It's an interesting problem.

Potentially it will devalue a college degree. "A degree? So you managed to take out a lot of loans and spam your profs with ChatGPT for 4 years. What else you got?"

Or maybe college degrees need to get enormously harder to attain, if every student now basically has an expert-in-all-fields a browser tab away. Less "write a simple game to demonstrate your knowledge of CPU architecture", more "make a CPU".

It may also put a premium on a proven ability to build projects and products. Rather than a degree, show off a thing you made, even with the assistance of AI tools. When I was hiring interns and co-ops, side projects were already a top-value thing to see on a resume.
 
Upvote
44 (44 / 0)

fredrum

Ars Scholae Palatinae
773
Whelp, I guess their panic-rush solution of Bard is dead/dying then. After that botch job, why exactly should we believe them on this one? And why would anyone want to use it when it might get killed just as fast?

Because this is the real thing! They actually spared us the in-between models:

B ard
C urmudgeon
D oughnut
E vildoer
F akenews
G emini (wooo! so good)
 
Upvote
-3 (5 / -8)

UseServ

Smack-Fu Master, in training
4
So Google admits that Gemini is still worse than ChatGPT right now, but sometime next year, the "Ultra" version might be fractionally better than what ChatGPT is now. Cool. Great.

Very much so. Their own benchmarks show that the version running on Bard falls well short of GPT-4 and is even outshone by PaLM 2 on many measures.

[image: Google's benchmark comparison chart]
 
Upvote
40 (40 / 0)

cadence

Ars Scholae Palatinae
943
Subscriptor++
Looks like Pro is slightly better than GPT-3.5, and Ultra will be slightly better than GPT-4. That would be pretty good if they were releasing Ultra today to everybody. But if it's going to take them a few more months, then OpenAI might have enough time to release a tuned up version of GPT-4 to match or surpass Ultra before it is even available to the public.

I can't try the new Bard to form my own opinion about its capabilities, because I'm in Canada and it is not available here. Hopefully, Google will make it available here once Ultra becomes part of Bard some time next year.

Overall, there is lots of promise here, but I have learned to be very skeptical of marketing videos from mega-corporations. We will have to wait for Ultra's public availability and the results of independent third-party tests.
 
Upvote
15 (15 / 0)

dragonfliet

Smack-Fu Master, in training
60
No solution, btw. This is what obsolescence of a model of "knowledge" looks like. An epistemic paradigm shift. Buckle up!

This is happening all across grade school as well. And the system is not set up for teachers in, say, 9th grade, to screen, and so it's ... just happening. Homework assignments are all meeting spec. No one knows exactly what to do. This from my friend in the NYC school district. Deer in headlight.

In college it is of course the same but doubled by the cleverness of the students, and how tricks spread through student bodies. Large scale denial.

So, it is bad right now. And no solution anyone in education gets paid enough to toil over.

Next year will be this year +1, next year +2, and so on. Students coming into first year college classes in 2025 will have a +2 AI off-load factor. 2026, +3, etc.

We have no broad-spread idea what this is going to do to cognitive development, and what pedagogy needs to now look like.

Buckle up indeed. Everyone reading this article is old tech. In 5 years, what will a college student look like? A high schooler?
I mean, it's absolutely not a solution, but you would be surprised how much of the absolute trash we're currently swamped in it would gut.

I don't think that most people are against the use of tech like GPT as a tool for better/easier comprehension and mastery of knowledge. That would be great. In the same way, easy access to information on the internet both created huge problems with plagiarism, but also created a number of ways to dramatically expand instant knowledge in really awesome ways.

The problem isn't that people combine GPT with thought and expertise; the problem is that they take its output as rote "truth" and spam it out. The result is that I'm failing a handful of students (5-10 a semester) who think that because GPT detectors are useless (they really are), we can't catch it. Plenty of others I can absolutely tell are using it, but they lack the easy tell-tales I can prove, and so they pass through with poor knowledge, poor critical thinking, and very low grades. You would think the low grades would get them to fix things, but by that point they're on to the next course, which requires further deep thinking, and they're in too deep to stop, so they just keep floundering. Part of this is absolutely residual pandemic grade/knowledge depression, but it's a bad cycle for them to get into.

Tools like Turnitin can't stop people from smartly plagiarizing internet sources, but they DO cut down on people who are dumbly plagiarizing. The smart ones will be clever and work around things, and that's annoying, but whatever: they really are critically engaging with texts in a way that is actually nearly "honest." If we could at least log GPT generations, and have that tool out there, we could cut down on so much absolutely worthless dreck and force those students back into some small measure of consideration. Students are smart, and they will always take shortcuts, because that is smart! It only becomes a problem when those shortcuts undermine their growth. The people who use tools like GPT smartly, not to write essays for them, but as a tool for bouncing ideas, mining for broader similarities, etc., and then bringing this back to their own intelligence and bounding forward? Hell yes, that's awesome.

Because the thing is, college is not designed to generate essays. Essays are, unfortunately, simply one of the better ways (not the best, nor the only) to "objectively" train and evaluate students in deep, complex thinking. We could switch to all in-person writing, but that is a shallow representation: the ability to generate text in a tiny time period with limited resources is still useful, but not as useful and important for complex thought as writing at length and in depth about a subject, referencing as many sources as necessary to really get to the heart of the thing.
 
Upvote
50 (51 / -1)
For now, Google hopes that Gemini will be the opening salvo in a new chapter of the battle to control AI assistants in the future, opposing firms like Anthropic, Meta, and the in-tandem duo of Microsoft and OpenAI.
Where is Amazon in all this? Are they not participating in the race? Are they in stealth? Am I just missing the coverage? Are they just so far behind that no one cares yet? A quick web search suggests the last is true.
 
Upvote
12 (12 / 0)

reimu240p

Smack-Fu Master, in training
60
No solution, btw. This is what obsolescence of a model of "knowledge" looks like. An epistemic paradigm shift. Buckle up!

This is happening all across grade school as well. And the system is not set up for teachers in, say, 9th grade, to screen, and so it's ... just happening. Homework assignments are all meeting spec. No one knows exactly what to do. This from my friend in the NYC school district. Deer in headlight.

In college it is of course the same but doubled by the cleverness of the students, and how tricks spread through student bodies. Large scale denial.

So, it is bad right now. And no solution anyone in education gets paid enough to toil over.

Next year will be this year +1, next year +2, and so on. Students coming into first year college classes in 2025 will have a +2 AI off-load factor. 2026, +3, etc.

We have no broad-spread idea what this is going to do to cognitive development, and what pedagogy needs to now look like.

Buckle up indeed. Everyone reading this article is old tech. In 5 years, what will a college student look like? A high schooler?
Man... combined with the fact that late Gen Z and Gen Alpha were raised with iPads, which have been linked to cognitive issues in younger kids*, it's worrying to say the least. Over-reliance on technology is going to be a very big issue in the next decade (or less) as they graduate and enter adulthood.

*I am aware that the study concludes it is the quality of the content rather than the quantity, but the issue is that giving a kid a tablet is almost always going to result in them watching overstimulating garbage on YouTube, with ads pushed by algorithms.
 
Upvote
13 (13 / 0)

TetsFR

Ars Scholae Palatinae
775
When you see a super-polished corporate video where a bunch of guys pat themselves on the back, with not a single live demo of the actual product in it, you can smell a rat from miles away.
Anyway, good attempt, Google. I feel bad for Demis Hassabis, who has been teasing Gemini hard recently but is still playing catch-me-if-you-can vs ChatGPT.
 
Upvote
3 (5 / -2)

Sajuuk

Ars Tribunus Angusticlavius
9,148
Man...combined with the fact that late Gen Z and Gen Alpha were raised with iPads which have been linked to cognitive issues with younger kids*, it's worrying to say the least. Over-reliance on technology is going to be a very big issue in the next decade (or less) as they graduate and enter adulthood.

*I am aware that the study concludes that it is the quality of the content versus the quantity, but the issue is that giving a kid a tablet is almost always going to result them in watching overstimulating garbage on youtube with ads pushed by algorithms.

Per the abstract:
We argue that the effects of screen viewing depend mostly on contextual aspects of the viewing rather than on the quantity of viewing. That context includes the behavior of adult caregivers during viewing, the watched content in relation to the child’s age, the interactivity of the screen and whether the screen is in the background or not. Depending on the context, screen viewing can have positive, neutral or negative effects on infants’ cognition.

So less "iPad bad" and more "garbage in, garbage out". That being said, I do understand it's incredibly easy to fall into bad behaviors with devices being readily available everywhere all the time.

edit: apologies, I didn't see your acknowledgement the first time around!
 
Upvote
13 (13 / 0)

lordcheeto

Ars Tribunus Militum
2,864
When you see a super-polished corporate video where a bunch of guys pat themselves on the back, with not a single live demo of the actual product in it, you can smell a rat from miles away.
Anyway, good attempt, Google. I feel bad for Demis Hassabis, who has been teasing Gemini hard recently but is still playing catch-me-if-you-can vs ChatGPT.
A company touting benchmarks is a red flag, too. Makes you wonder how much they were trying to game the benchmarks. It's an LLM for crying out loud. If there's any time to let your product do the talking, it's now.
 
Upvote
8 (8 / 0)
Man...combined with the fact that late Gen Z and Gen Alpha were raised with iPads which have been linked to cognitive issues with younger kids*, it's worrying to say the least. Over-reliance on technology is going to be a very big issue in the next decade (or less) as they graduate and enter adulthood.

*I am aware that the study concludes that it is the quality of the content versus the quantity, but the issue is that giving a kid a tablet is almost always going to result them in watching overstimulating garbage on youtube with ads pushed by algorithms.
Whew, good thing I raised my young'uns with Kindles!
 
Upvote
14 (14 / 0)

JimboJonesJunior

Smack-Fu Master, in training
22
So, it is bad right now. And no solution anyone in education gets paid enough to toil over.

Next year will be this year +1, next year +2, and so on. Students coming into first year college classes in 2025 will have a +2 AI off-load factor. 2026, +3, etc.

We have no broad-spread idea what this is going to do to cognitive development, and what pedagogy needs to now look like.

Buckle up indeed. Everyone reading this article is old tech. In 5 years, what will a college student look like? A high schooler?
Yep. As someone who does a very limited amount of university teaching, the AI-generated assignments I've seen so far range from utter hallucinated garbage to mediocre scraping-by, but 5 years from now really worries me.

Our particular unit is designed extremely well, however: the assessments are "here's something I made, here's how it works, this is what I learned doing it". Anything less than that, and I don't see how it would be possible to catch the students who have decided to screw themselves over by using ChatGPT. I also have the luxury of time to sit down with students, get them to talk through their code, and debug it with them, but there's no way we would have the resources for that level of attention in larger classes.
 
Upvote
13 (13 / 0)
I'll have to try this out later on a PowerShell script I have. I really don't know PowerShell, but I had a need to write a pretty extensive script that tests for the existence of directories, creates them, installs software, and so on. Using Bard, I had a working script in a couple of hours. One thing Bard kept doing, though, was mixing in Bash syntax. It took me a bit to figure that out since, like I said, I don't know PowerShell, but once I fixed those spots it was fine. Maybe this model will stop doing that.
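The Bash-into-PowerShell mixing described above tends to show up in exactly this kind of directory check. As a hedged sketch (the path and variable names here are hypothetical, not from the commenter's actual script), here is the POSIX shell idiom, with the PowerShell cmdlet equivalent shown in comments:

```shell
#!/bin/sh
# Hypothetical example path, not from the commenter's script.
APP_DIR="/tmp/example_app"

# POSIX/Bash idiom: bracket test plus mkdir -p.
if [ ! -d "$APP_DIR" ]; then
    mkdir -p "$APP_DIR"
fi
[ -d "$APP_DIR" ] && echo "directory ready"

# The PowerShell equivalent uses cmdlets instead of bracket tests:
#   if (-not (Test-Path $AppDir)) {
#       New-Item -ItemType Directory -Path $AppDir | Out-Null
#   }
# A model blending the two idioms might emit "[ ! -d $AppDir ]"
# inside a PowerShell script, where "[" begins a type literal and
# the line fails to parse at all.
```

That mismatch is easy to spot once you know both idioms, which is presumably why it took a while to track down when learning PowerShell on the fly.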
 
Upvote
2 (2 / 0)