• 1 Post
  • 317 Comments
Joined 2 years ago
Cake day: September 25th, 2023

  • lol, performance art. That’d be interesting. I’d watch too.

    Plus 20K people did watch a fish play Pokemon.

    Also, now that I think about it, it shouldn’t be too hard to feed a vision model a specific, curated subset of the visuals, e.g. all the equipment from https://darksouls.wiki.fextralife.com/Weapons, and only then give advice. There is so much hierarchical information in there, e.g. one doesn’t get an Elden Ring weapon in Dark Souls, or doesn’t get an end-of-game weapon (except with glitches) after 1h of play time, etc., so it’s possible to narrow the search space a lot.
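    That narrowing idea can be sketched in a few lines. To be clear, the catalogue below is entirely made up for illustration (the names, games, and hour estimates are not real wiki data), and `plausible_weapons` is a hypothetical helper, not anything from an existing tool:

```python
# Hypothetical sketch: prune the candidate-weapon list with simple
# hierarchical constraints before asking a vision model for advice.
# The catalogue entries are illustrative, not real wiki data.
CATALOGUE = [
    {"name": "Longsword", "game": "Dark Souls", "earliest_hours": 0},
    {"name": "Black Knight Halberd", "game": "Dark Souls", "earliest_hours": 2},
    {"name": "Moonlight Greatsword", "game": "Dark Souls", "earliest_hours": 25},
    {"name": "Rivers of Blood", "game": "Elden Ring", "earliest_hours": 15},
]

def plausible_weapons(game: str, hours_played: float) -> list[str]:
    """Keep only weapons from the right game that the player could
    plausibly have obtained by now (glitches aside)."""
    return [w["name"] for w in CATALOGUE
            if w["game"] == game and w["earliest_hours"] <= hours_played]
```

    With metadata like this, an Elden Ring weapon never even enters the search space for a Dark Souls screenshot, and late-game gear is excluded for a fresh save.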

    I imagine a lot can be done with just a few curated sources. Now… again (and I apologize for repeating myself so much while possibly sounding pedantic), why? Like, what’s the actual point?


  • Thanks again. Well, the first sentence started off so well, correct game, neat… but then the wrong weapon… so totally pointless.

    Again this can eventually be fixed. It’s “just” a data problem, and that’s exactly what models (and the entire infrastructure of data centers and researchers funded by VC money) excel at. So I think one can safely bet it will get there.

    But… today, can one genuinely imagine playing Dark Souls (or any other game) without… knowing it? Like, how does a search that returns the wrong weapon, and only sometimes the right one, help? How is that more convenient than picking a weapon up, then manually searching for its name on desktop or mobile, knowing with 99% certainty the result will be right and the advice genuine and relevant?


  • Wow thanks for genuinely trying and formatting this properly!

    So… it’s interesting BUT on this specific part there is a “trick” IMHO, so I’d be curious how frequent it is: text!

    What I mean is: I know of Dark Souls but I haven’t played it. Yet, solely by putting the only visible piece of text available (OK, let’s ignore the “20” too) into a search engine, “Asylum Demon”, I get relevant suggestions, actually helpful content like fextralife, right away.

    I’d argue the interesting question then becomes how much context is needed to get useful advice… and, more importantly, how much context is gained with an image versus what the player already knows, e.g. the game name, maybe the “level”, or anything unique, e.g. here the boss name.



  • If someone somehow wants to test this locally, I suggest:

    • install a vision model locally, e.g. Moondream (which Ollama supports, among alternatives), then
    • take a screenshot of your game,
    • write a prompt like “How can I play this game better?”,
    • query the vision model with the image and your prompt,
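    The steps above can be sketched against Ollama’s HTTP API, assuming Ollama is running locally on its default port 11434 and the moondream model has been pulled (untested, in the spirit of the PS):

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(image_path: str, prompt: str, model: str = "moondream") -> dict:
    """Base64-encode the screenshot and build the /api/generate request body."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {"model": model, "prompt": prompt,
            "images": [image_b64], "stream": False}

def ask(image_path: str, prompt: str = "How can I play this game better?") -> str:
    """Send the screenshot + prompt to the local vision model, return its reply."""
    data = json.dumps(build_payload(image_path, prompt)).encode("utf-8")
    req = urllib.request.Request(OLLAMA_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

    Calling `ask("screenshot.png")` should then return whatever advice the model comes up with, for better or (more likely) worse.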

    marvel at how pointless and costly the whole setup is, and at how a basic query on e.g. DuckDuckGo with “game name” + your prompt would yield way, WAY better results from actual humans; then uninstall the whole thing and keep on playing with your actual brain.

    At least now you can say you tried before you complain, rightfully, that it sucks.

    For more check https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence

    PS: I didn’t actually try this, I’m too lazy for that right now, but feel free to report back if you do!

    Edit: two potential optimizations (despite not being sure it ever makes sense in the first place!)

    • do it automatically, e.g. a ~/gaming_screenshots directory (populated via e.g. a Spectacle shortcut) monitored via inotify, then notify-send the suggestion, thus staying in game during the whole process
    • fine-tune on specific visual datasets, e.g. rely on fextralife as mentioned in https://lemmy.world/post/37758804/20113877
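    The first optimization could look something like this. It is a stdlib-only sketch that polls the directory instead of using inotify proper (inotify-tools’ `inotifywait` would be the event-driven way on Linux), and it shells out to `notify-send` for the desktop notification; `advise` is any callable you plug in that turns a screenshot path into a string of advice. All names here are mine, not from any existing tool:

```python
import os
import subprocess
import time

def new_screenshots(before: set[str], after: set[str]) -> set[str]:
    """Return filenames that appeared between two directory snapshots."""
    return after - before

def watch(directory: str, advise, poll_seconds: float = 2.0) -> None:
    """Poll `directory`; for each new screenshot, call `advise` on it and
    pop a desktop notification via notify-send, so you stay in game."""
    seen = set(os.listdir(directory))
    while True:
        time.sleep(poll_seconds)
        current = set(os.listdir(directory))
        for name in sorted(new_screenshots(seen, current)):
            advice = advise(os.path.join(directory, name))
            subprocess.run(["notify-send", "Game advice", advice])
        seen = current
```

    Something like `watch(os.path.expanduser("~/gaming_screenshots"), my_vision_query)` would then run the whole loop in the background.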



  • I don’t think it’s AI related. If you play on a public server, you should expect your data to be public anyway.

    If you play on a private server then there might be rules to be defined.

    All that said, I’m not sure what information would actually be problematic to share, but if that’s a concern, sure:

    • play only on private servers that ban such AI tools (basically consider it cheating, which is arguable IMHO, but I agree about the negative side effects)
    • make a new account for multiplayer that has no public link with your identity


  • (pasting a Mastodon post I wrote a few days ago about StackOverflow, but IMHO it applies to Wikipedia too)

    "AI, as in the current LLM hype, is not just pointless but rather harmful epistemologically speaking.

    It’s a big word, so let me unpack the idea with one example:

    • StackOverflow, or SO for short.

    So SO is cratering in popularity. Maybe it’s related to the LLM craze, maybe not, but in practice fewer and fewer people are using SO.

    SO is basically a software developer social network that goes like this :

    • hey I have this problem, I tried this and it didn’t work, what can I do?
    • well (sometimes condescendingly) it works like this so that worked for me and here is why

    then people discuss via comments, answers, votes, etc. until, hopefully, the most appropriate (which does not mean “correct”) answer rises to the top.

    The next person with the same, or similar enough, problem gets to try right away what might work.

    SO is very efficient in that sense but sometimes the tone itself can be negative, even toxic.

    Sometimes the person asking didn’t bother to search much, sometimes they clearly have no grasp of the problem, so replies can be terse, if not worse.

    Yet the content itself is often correct in the sense that it does solve the problem.

    So SO in a way is the pinnacle of “technically right” yet being an ass about it.

    Meanwhile, what if you could get roughly the same mapping between a problem and its solution, but in a nice, even sycophantic, manner?

    Of course the switch will happen.

    That’s nice, right?.. right?!

    It is. For a bit.

    It’s actually REALLY nice.

    Until the “thing” you “discuss” with has, maybe, a KPI of keeping you engaged (as its owner gets paid per interaction), regardless of how usable (let’s not even say true or correct) its answer is.

    That’s a deep problem because that thing does not learn.

    It has no learning capability. It’s not just “a bit slow” or “dumb”; rather, it does not learn, at all.

    It gets updated with a new dataset, fine-tuned, etc.… but there is no action that leads to invalidating a hypothesis, generating a novel one, then setting up a safe environment to test it within (that’s basically what learning is).

    So… you sit there until the LLM gets updated, but… with what? Now that fewer and fewer people bother updating its source (namely SO), how is your “thing” going to learn, sorry, to get updated, without new contributions?

    Now if we step back, not at the individual level but at the collective level, we can see how short-termist the whole endeavor is.

    Yes, it might help some, even a lot of, people to “vile code”, sorry I mean “vibe code”, their way out of a problem, but if:

    • they, the individual
    • it, the model
    • we, society, do not contribute back to the dataset to upgrade from…

    well, I guess we are going faster right now, for some, but overall we will inexorably slow down.

    So yes, epistemologically, we are slowing down, if not worse.

    Anyway, I’m back on SO, trying to actually understand a problem. Trying to actually learn from my “bad” situation and, rather than randomly trying the statistically most likely solution, genuinely understand WHY I got there in the first place.

    I’ll share my answer back on SO hoping to help others.

    Don’t just “use” a tool; think, genuinely. It’s not just fun, it’s also liberating.

    Literally.

    Don’t give away your autonomy for a quick fix, you’ll get stuck."

    originally on https://mastodon.pirateparty.be/@utopiah/115315866570543792


  • It can be, but not to me. To me the point is to test what’s actually feasible and usable. It can be Wikipedia on my HDD, but it could also be SO on a microSD or an RPi… or it could be something totally different on another piece of hardware with another piece of storage. It will depend on the context.

    So again, sure, having the data itself feels nice, but in practice I never really needed it. If my HDD died tomorrow, I would shrug. If the Kiwix library stopped working tomorrow, I’d be disappointed, but I could rely on .zim files elsewhere, e.g. on torrent trackers.

    IMHO the point isn’t files, the point is usable knowledge.

    Edit: to be clear, this isn’t philosophy; you can see exactly what I mean and even HOW I do it (and even when) via the edits of my public wiki or my git repositories.