sueden.social is one of many independent Mastodon servers you can use to participate in the Fediverse.
A community for everyone who feels drawn to the South. We can do anything except High German.

Server statistics:

2K active profiles

#description

3 posts · 3 participants · 1 post today

🗨️ For internet users who believe that a #description of an ongoing crime against humanity is #racism: you are working against international law, humanitarian law, and the freedom of expression. You are therefore set against every existing basic right, legal right, and historical record of humanity.

Now you have two options: educate yourself by reading a book on law, or get rid of non-intelligent #algorithms.

Option three: combine options one and two.

I'm happy to see PixelFed being adopted by many more people, and to see how it will highlight the Fediverse. I also fear that the lovely, lovely images posted here on Mastodon with descriptions will go away and be segregated to another platform, or that people on PixelFed will not pay as much attention to photo descriptions as they do here on Mastodon. I hope that people will remain as vigilant there about #accessibility.

So, like, I just have to ask: for people who are super critical of AI for accessibility, what do you expect instead? Do you want blind people to have human... guides, or whatever, who will narrate the world around them? Do you want humans to describe all your pictures? Videos? Porn? Because that's about the only other option. And you may return with "Well, audio description." And I return with "You think people are going to describe every YouTube video out there? Or old TV shows like Dark Shadows?" Because honestly, that's what it would take. If AI were *not* around and we wanted *that* kind of access, that's what we'd have to ask of darn near every sighted human in the world. And I just don't feel comfortable demanding that of them.

Now we'll see what Apple does to give us what will hopefully be even better image descriptions. Imagine a 3B model built from **high-quality** image/description pairs, trained to do nothing but describe images. Apple has done pretty darn well without LLMs so far, so maybe they'll surprise us further. But my goodness, I'd much rather have something that, yes, makes me *feel* included, maybe a tad more than it actually *does* include me. And it's for each and every blind person to decide for themselves whether they want to use AI for image (and probably soon video) descriptions, and how far they're willing to trust it. But given this much real, human access, I just hope the people detracting from AI understand that those of us who use it are now used to having images described, and soon videos too. It's not something I think people should dismiss so quickly.

#AI #accessibility #blind

Okay #alttext enthusiasts.

Here are the 5 prompts we present to our vLLM backend to analyze our #memes & #images posts.

The answers will be summarized and posted in each post's #description. (A rough sketch of the pipeline follows the prompt list below.)

Are there other #questions we should ask the #ai ?

Q1: Explain this image, be verbose.
Q2: What do we see in this image?
Q3: What is the text in this image? Details matter.
Q4: What is happening in this image?
Q5: Why is this image funny? Be critical.
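
For anyone who wants to try something similar, here is a minimal sketch of such a pipeline against a vLLM server's OpenAI-compatible endpoint. The endpoint URL, the model name, and the final summarization step are illustrative assumptions, not our exact setup.

```python
# Minimal sketch: ask a vLLM-served vision model the five questions,
# then condense the answers into one alt-text candidate.
# Endpoint, model, and summarization prompt are assumptions.
import base64
from openai import OpenAI

PROMPTS = [
    "Explain this image, be verbose.",
    "What do we see in this image?",
    "What is the text in this image? Details matter.",
    "What is happening in this image?",
    "Why is this image funny? Be critical.",
]

# vLLM exposes an OpenAI-compatible API; the key is unused locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "llava-hf/llava-1.5-7b-hf"  # assumed vision model

def describe(image_path: str) -> str:
    with open(image_path, "rb") as f:
        data_url = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

    # Ask each question separately so the answers stay focused.
    answers = []
    for prompt in PROMPTS:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }],
        )
        answers.append(resp.choices[0].message.content.strip())

    # Summarize the five answers into one concise description.
    summary = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "Summarize these notes into one concise image description "
                       "suitable as alt text:\n\n" + "\n\n".join(answers),
        }],
    )
    return summary.choices[0].message.content.strip()

print(describe("meme.jpg"))
```

Summarizing rather than concatenating keeps the final #description within post limits and avoids the five answers repeating each other.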

Alright #mastodon #Fediverse let me know!

CC: #alttextmafia