I accidentally cheesed the spider mama fight because all the baby spiders mobbed the spiritual weapon. Good times…
His description on the character creation screen outright says he’s a vampire. Easy enough to miss if you jump straight to the custom creator, but he’s the first origin character in the list.
Not sure how Diablo works, but the big difference is you move with the stick rather than clicking where you want to go, and you get a set of action rings instead of bars.
Imo it’s not bad; it lets me play away from my desk after sitting at a computer all day for work.
The only thing that really annoys me is picking up small things directly from the overworld… You can scroll through selectable items with the d-pad, but you have to be within a certain distance, and the order of items changes when your character moves to pick something up.
I would suggest the textbook Artificial Intelligence: A Modern Approach by Russell and Norvig. It’s a good overview of the field and has been in circulation since 1995. https://en.m.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
Here’s a photo as an example of how the book approaches the topic: there’s an entire chapter on it, with sections on four different approaches, and essentially even the researchers have been arguing about what intelligence is since the beginning.
But all of this has been under the umbrella of AI. Just because corporations have picked up on it doesn’t invalidate the decades of work done by scientists in the name of AI.
My favourite way to think of it is this: people have forever argued whether or not animals are intelligent or even conscious. Is a cat intelligent? Mine can manipulate me, even if he can’t do math. Are ants intelligent? They use the same biomechanical constructs as humans, but at a simpler scale. What about bacteria? Are viruses alive?
If we can create an AI that fully simulates a cockroach, down to every firing neuron, does it mean it’s not AI just because it’s not simulating something more complex, like a mouse? Does it need to exceed a human to be considered AI?
Honestly, I think part of it is that having an entire community of people suffering depressive symptoms becomes a depressing environment.
I’m sure I heard this in a Brené Brown video, but in order to be able to help someone else, you need to be in the right place yourself. Two empty glasses can’t help fill each other. And most people can’t help an entire community of struggling people; one glass can’t fill fifty, and it’s futile and self-damaging to try. It’s why we have professionals that do one-on-one therapy.
And, this might be unpopular, but I think historically this is why we have priests too. I’m not religious, but I think that community offers that to some people.
Sometimes people need to vent, and some people aren’t lucky enough to be in a position where they can vent to anybody, but I don’t know if diving into a community where you expose yourself to everyone else’s problems too is the solution. Things like addictions counseling are controlled, with professionals at the helm, and often in small spaces, with a prescribed meeting time and an end.
I think you’re conflating “intelligence” with “being smart”.
Intelligence is more about taking in information and being able to make a decision based on that information. So yeah, automatic traffic lights are “intelligent” because they use a sensor to check for the presence of cars and “decide” when to switch the light.
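Something like this toy sketch already clears that bar: sensor info in, decision out. (The rules are invented for the example, not from any real controller.)

```python
# Toy "traffic light controller": take in sensor info, make a decision.
def next_light_action(car_waiting_on_side_street: bool, seconds_since_change: int) -> str:
    # Only switch if someone is actually waiting and the main road
    # has had its green long enough.
    if car_waiting_on_side_street and seconds_since_change > 30:
        return "switch"
    return "hold"

print(next_light_action(True, 45))   # switch
print(next_light_action(False, 45))  # hold
```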
Acting like some GPT is on the same level as a traffic light is silly though. On a base level, yes, it “reads” a text prompt (along with any messaging history) and decides what to write next. But that decision it’s making is much more complex than “stop or go”.
I don’t know if this is an ADHD thing, but when I’m talking to people, sometimes I finish their sentences in my head as they’re talking. Sometimes I nail it, sometimes I don’t. That’s essentially what ChatGPT is: a sentence finisher that happened to read a huge amount of text on the web, so it’s got context for a bunch of things. It doesn’t care if it’s right, and it doesn’t look things up before it says something.
But to have a computer be able to do that at all?? That’s incredible, and it took over 50 years of AI research to hit that point (yes, it’s been a field in universities for a very long time, and for most of that time people said it was impossible), and we only hit it because our computers got powerful enough to do it at scale.
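If anyone wants a feel for the “sentence finisher” idea without any of the actual transformer machinery, here’s a toy sketch that just counts which word tends to follow which in a tiny made-up corpus and always picks the most common continuation:

```python
# Toy "sentence finisher": predict the next word purely from which word
# tended to follow it before. The corpus is made up for the example.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def finish(word, length=5):
    words = [word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Always take the most common continuation. It doesn't "care"
        # whether the result is true, only what's statistically likely.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(finish("the"))  # prints a short, plausible-looking continuation
```

ChatGPT is obviously doing something vastly more sophisticated than counting word pairs, but the framing is the same: given the text so far, predict what comes next.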
In a game that takes dozens of hours to get through? Of course I’m save scumming to get the result I want. If I don’t care about some consequence maybe I’ll let a failure slide but for the big stuff, I’m not starting again and doubling my playtime, I’m usually burnt out on the title by the end of the first run.
That’s odd! I had no issues with the stock Ubuntu install. Installing CUDA on a Windows machine requires WSL2 now, but I didn’t really use it for anything more than that, so I could’ve just not used it enough to find problems. As soon as I finished the semester that required proprietary software, I got rid of Windows entirely though.
IMO, as long as you get comfortable with the basics like navigating directories and moving files, installing and updating software (first through something like apt, compiling stuff manually isn’t necessary at first), and managing some basic bash settings like aliases, you’re pretty much set. At least, from a programmer’s standpoint.
I dunno how well versed OP is in computers overall, is the thing. The above is a good baseline, but you need a general understanding of how operating systems work to be really comfortable with something like Arch. Like, you’ve gotta know what a driver is before you can troubleshoot hardware issues, or if you’re managing disks it’s good to have an idea of how filesystems work. But that all comes with experience.
I suggest trying Windows Subsystem for Linux. You’ll get a simpler way to get familiar with the command line, which is the important part if you’re interested in development.
That or dual boot, you don’t need to set aside a large partition for messing around.
I imagine this commercial/personal art dichotomy has existed ever since the first time someone paid for art. Like how there’s always been folk music played around campfires in contrast to the operas and orchestras where the local lord’s funding goes.
Nope, I’m playing DOS2, since that’s been sitting in my steam library for way too long!
THEN maybe I’ll BG3. If my laptop can handle it…
I consider computer RPGs to be more in the vein of tactical RPGs rather than Pokemon/Final Fantasy style turn based RPGs tbh. It’s turn based, but positioning is key. Or, at least they scratch the same itch for me.
And Fire Emblem, XCOM, FF Tactics, etc have never exactly had mind blowing sales.
Decoding Brain Representations by Multimodal Learning of Neural Activity and Visual Features, DOI 10.1109/TPAMI.2020.2995909
Published in 2020 by the IEEE. https://ieeexplore.ieee.org/document/9097411
Makes me think about how the BBC started a mastodon instance. If the CBC follows their example, then federation changes the relationship with social media, as it’s sort of baked in…?
I’m a fan of “keep it stupid simple” or, as I tell myself at work on the daily, “keep it simple, stupid”!
Oh man, I had no idea about that.
I’ve been using Nova for something close to a decade now, I think. I just toss it on my new devices and move on; it’s done what I wanted for ages (mostly the adjustable grid, but I’m sure there are features I’ve stuck with that just feel like defaults at this point; stock launchers just feel weird).
I don’t think you need to wait years for user friendly Linux tbh! I recommend checking out Linux Mint. It’s basically designed for people used to Windows and handles the technical stuff for you.
You can do almost everything through the GUI rather than the command line. For things like updates, it’ll show you a little notification in the corner by the clock like you’re used to; you open up the software manager and click the update button.
And most software nowadays can either be downloaded through an app-store-like interface or by downloading an executable file from a website.
And if you’ve ever used a Mac, there’s a Time Machine equivalent built in (Timeshift). So you can set up an automatic daily/weekly backup, and if you mess something up, in most cases you can revert to a point before it was messed up.
I say give it a shot, you can always go back if it’s not for you! But usability has improved so much in the last few years.
Not the poster you’re replying to, but I’m assuming you’re looking for some sort of source that neural networks generate stuff, rather than plagiarize?
Google scholar is a good place to start. You’d need a general understanding of how NNs work, but it ends up leading to papers like this one, which I picked out because it has neat pictures as examples. https://arxiv.org/abs/1611.02200
What this one does is take an input in the form of a face and turn it into a cartoon. They call it an emoji because it’s based on that style, but it’s the same principle as how AI art is generated: learn a style, then take a prompt (image or text) and do something with the prompt in that style.
For laymen who might not know how GANs work:
Two AIs are developed at the same time: one that generates and one that discriminates. The generator creates a dataset, it gets mixed in with some real data, and then all of that gets fed into the discriminator, whose job is to say “fake or not”.
Both AI get better at what they do over time. This arms race creates more convincing generated data over time. You know your generator has reached peak performance when its twin discriminator has a 50/50 success rate. It’s just guessing at that point.
There literally cannot be a better AI than the twin discriminator at detecting that generator’s work. So anyone trying to make tools to detect chatGPT’s writing is going to have a very hard time of it.
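For anyone who wants to see the arms race in code, here’s a rough sketch of that training loop in PyTorch. All the sizes and hyperparameters are made up for illustration, and this isn’t the setup from the emoji paper above, just the generic GAN idea:

```python
# Rough sketch of the generator/discriminator arms race in PyTorch.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. a flattened 28x28 image

# The two "twin" networks: one generates, one judges fake vs. real.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # 1) Discriminator turn: generated data mixed with real data,
    #    and it has to call "fake or not" on every sample.
    fake_batch = generator(torch.randn(n, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator turn: it "wins" when the discriminator labels
    #    its output as real.
    g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    return d_loss.item(), g_loss.item()
```

Once the discriminator’s calls on that mixed batch are no better than a coin flip, you’ve hit the 50/50 point and the generator has effectively won.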
Ah, so Seagate still sucks!