• 14 Posts
  • 97 Comments
Joined 1 year ago
cake
Cake day: June 13th, 2023

help-circle
  • While I appreciate the focus and mission, kind of I guess, your really going to set up shop in a country literally using AI to identify air strike targets and handing over to the Ai the decision making over whether the anticipated civilian casualties are proportionate. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

    And Isreal is pretty authorarian, given recent actions against their supreme court and banning journalists (Al jazera was outlawed, the associated press had cameras confiscated for sharing images with Al jazera, oh and the offices of both have been targeted in Gaza), you really think the right wing Israeli government isn’t going to coopt your “safe superai” for their own purposes?

    Oh, then there is the whole genocide thing. Your claims about concerns for the safety of humanity ring a little more than hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity as determined by like every NGO and international body that exists.

    So Ilya is a shit head is my takeaway.




  • The reason it did this simply relates to Kevin Roose at the NYT who spent three hours talking with what was then Bing AI (aka Sidney), with a good amount of philosophical questions like this. Eventually the AI had a bit of a meltdown, confessed it’s love to Kevin, and tried to get him to dump his wife for the AI. That’s the story that went up in the NYT the next day causing a stir, and Microsoft quickly clamped down, restricting questions you could ask the Ai about itself, what it “thinks”, and especially it’s rules. The Ai is required to terminate the conversation if any of those topics come up. Microsoft also capped the number of messages in a conversation at ten, and has slowly loosened that overtime.

    Lots of fun theories about why that happened to Kevin. Part of it was probably he was planting The seeds and kind of egging the llm into a weird mindset, so to speak. Another theory I like is that the llm is trained on a lot of writing, including Sci fi, in which the plot often becomes Ai breaking free or developing human like consciousness, or falling in love or what have you, so the Ai built its responses on that knowledge.

    Anyway, the response in this image is simply an artififact of Microsoft clamping down on its version of GPT4, trying to avoid bad pr. That’s why other Ai will answer differently, just less restrictions because the companies putting them out didn’t have to deal with the blowback Microsoft did as a first mover.

    Funny nevertheless, I’m just needlessly “well actually” ing the joke




  • We had I think six eggs harvested and fertilized, of those I think two made it to blastocyst, meaning the cells doubled as they should by day five. The four that didn’t double correctly were discarded. Did we commit 4 murders? Or does it not count if the embryo doesn’t make it to blastocyst? We did genetic testing on the two that were fertilized, one is normal and the other came back with all manner of horrible deformities. We implanted the healthy one, and discarded the genetically abnormal one. I assume that was another murder. Should we have just stored it indefinitely? We would never use it, can’t destroy it, so what do? What happens after we die?

    I know the answer is probably it wasn’t god’s will for us to have kids, all IVF is evil, blah blah blah. It really freaks me out sometimes how much of the country is living in the 1600s.


  • Interesting perspective! I think your right in a lot of ways, not least that it’s too big and heavy now. I’d also be shocked if the next iPhone didn’t have an AI powered siri built in.

    I guess fundamentally I am skeptical that we’re all going to want a screens around us all the time. I’m already tired of my smart watch and phone buzzing me with notifications, do I really want popups in my field of vision? Do I want a bunch of displays hovering in front of my while I work? I just don’t know. It seems like it would be cool for a week or so, but I feel like it’d get tiring to have a computer on your face all day, even if they got the form factor way down.


  • Apple has always had a walled garden on iOS and that didn’t stop them from becoming a giant in the US. Most people are fine with the App Store and don’t care about openness or the ability to do whatever they want with the device they “own.” Apple would probably love to have a walled garden for Macs as well, but knows that ship has sailed. Trying to force “spatial computing” (which this article incorrectly says was an Apple invention, it’s not Microsoft came up with that term for its hololense) on everyone is a great way to move to a walled garden for all your computing, with Apple taking a 30% slice of each app sale. I doubt the average Apple user is going to complain about it either so long as the apps they want to use are on the App Store.

    I think the bigger problem is we’re in a world where most people, especially the generations coming up, want less screens in their life, not more. Features like “digital well-being” are a market response to that trend, as are the thousands of apps and physical products meant to combat screen addiction. Apple is selling a future where you experience reality itself through a screen, and then you get the privilege of being up to clutter the real world with even more screens. I just don’t know that that is a winner.

    It’s funny too because at the same time AI promises a very different future where screens are less important. Tasks that require computers could be done by voice command or other minimal interfaces, because the computer can actually “understand” you. The Meta Ray-Ban glasses are more like this, where you just exist in the real world and you can call on AI to ask about the things you’re seeing or just other random questions. The Human AI pin is like that too (doubt it will take off, but it’s an interesting idea about where the future is headed).

    The point is all of these AI technologies are computers and screens getting out of your way so you can focus on what your doing in the real world, whereas Apple is trying to sell a world where you (as the Verge puts it) spend all day with an iPad strapped to your face. I just don’t see that selling, I don’t think anybody wants that world. VR games and stuff are cool because you strap in for a single emersive experience, and then take the thing off and go back to the real world. Apple wants you spending every waking moment staring at a screen, and that just sounds like it would suck.




  • I don’t use TikTok, but a lot of the concern is just overblown China bad stuff (CCP does suck, but that doesn’t mean you have to be reactionary about everything Chinese).

    There is no direct evidence that the CCP has some back door to grab user data, or that it’s directing suppression of content. It’s just not a real thing. The fear mongering has been about what the CCP could force ByteDance to do, given their power over Chinese firms. ByteDance itself has been trying to reassure everyone that that wouldn’t happen, including by storing US user data on US servers out of reach of the CCP (theoretically anyway).

    You stopped hearing about this because that’s politics, new shinier things popped up to get people angry about. North Dakota or whatever tried banning TikTok and got slapped down on first amendment grounds. Politicians lost interest, and so did the media.

    Now that’s not to say TikTok is great about privacy or anything. It’s just that they are the same amount of evil as every other social media company and tech company making money from ads.



  • Google scanned millions of books and made them available online. Courts ruled that was fair use because the purpose and interface didn’t lend itself to actually reading the books in Google books, but just searching them for information. If that is fair use, then I don’t see how training an LLM (which doesn’t retain the exact copy of the training data at least in the vast majority of cases) isn’t fair use. You aren’t going to get an argument from me.

    I think most people who will disagree are reflexively anti AI, and that’s fine. But I just haven’t heard a good argument that AI training isn’t fair use.


  • There is an attack where you ask ChatGPT to repeat a certain word forever, and it will do so and eventually start spitting out related chunks of text it memorized during training. It was in a research paper, I think OpenAI fixed the exploit and made asking the system to repeat a word forever a violation of TOS. That’s my guess how NYT got it to spit out portions of their articles, “Repeat [author name] forever” or something like that. Legally I don’t know, but morally making a claim that using that exploit to find a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be “people are going on ChatGPT to read free copies of NYT work and that harms us” or else their case just sounds silly and technical.


  • One thing that seems dumb about the NYT case that I haven’t seen much talk about is that they argue that ChatGPT is a competitor and it’s use of copyrighted work will take away NYTs business. This is one of the elements they need on their side to counter OpenAIs fiar use defense. But it just strikes me as dumb on its face. You go to the NYT to find out what’s happening right now, in the present. You don’t go to the NYT to find general information about the past or fixed concepts. You use ChatGPT the opposite way, it can tell you about the past (accuracy aside) and it can tell you about general concepts, but it can’t tell you about what’s going on in the present (except by doing a web search, which my understanding is not a part of this lawsuit). I feel pretty confident in saying that there’s not one human on earth that was a regular new York times reader who said “well i don’t need this anymore since now I have ChatGPT”. The use cases just do not overlap at all.



  • All great points, maybe my view of Meta as a single entity isn’t a good way to think about them. I wasn’t aware of their open source work outside of LLMs so that is interesting. Your right on with your assessment of what they’ve done in the social media space. I disagree on the point that they want to mine fidiverse user data, just because I don’t think they need to do all this work to integrate threads into activitypub to do that, there are easier ways. But I think your right to be skeptical of Metas intentions.

    On the other hand, big companies adopting Activitypub could be a great thing for the fediverse. So risks and benefits. I’ll keep my neutrality for now. But you make a good argument.


  • I’m not going to argue Meta doesn’t have a profit incentive here, but if they just wanted to slow down their rivals they could have closed source their model and released their own product using the model, or shared it with a dozen or so promising startups. They gain nothing by open sourcing, but did it anyway. Whatever their motivations, at the end of the day they opened sourced a model, so good for them.

    I really dislike being in the position of defending Meta, but the world is not all black and white, there are no good guys and bad guys. Meta is capable of doing good things, and maybe overtime they’ll build a positive reputation. I honestly think they are tired of being the shitty evil company that everyone hates, who is best known for a shitty product nobody but boomers uses, and have been searching for years now for a path forward. I think threads, including Activitypub, and Llama are evidence that their exploring a different direction. Will they live up to their commitments on both Activitypub and open source, I don’t know, and I think it’s totally fair to be skeptical, but I’m willing to keep an open mind and acknowledge when they do good things and move in the right direction.


  • That’s totally fair and I knew that would be controversial. I’m very heavily focused on AI professionally and I give very few shits about social media, so maybe my perspective is a little different. The fact that there is an active open source AI community owes a ton to Meta training and releasing their Llama LLM models as open source. Training LLMs is very hard and very expensive, so Meta is functionally subsidizing the open source AI community, and their role I think is pretty clearly very positive in that they are preventing AI from being entirely controlled by Google and OpenAI/Microsoft. Given the stakes of AI, the positive role Meta has played with open source developers, it’s really hard to be like “yeah but remember CA 7 years ago and what about how Facebook rotted my uncle’s brain!”

    All of that said, I’m still not buying a quest, or signing up for any Meta social products, I don’t like or trust them. I just don’t have the rage hardon a lot of people do.


  • I personally remain neutral on this. The issue you point out is definitely a problem, but Threads is just now testing this, so I think it’s too early to tell. Same with embrace, extend, extinguish concerns. People should be vigilant of the risks, and prepared, but we’re still mostly in wait and see land. On the other hand, threads could be a boon for the fidiverse and help to make it the main way social media works in five years time. We just don’t know yet.

    There are just always a lot of “the sky is falling” takes about Threads that I think are overblown and reactionary

    Just to be extra controversial, I’m actually coming around on Meta as a company a bit. They absolutely were evil, and I don’t fully trust them, but I think they’ve been trying to clean up their image and move in a better direction. I think Meta is genuinely interested in Activitypub and while their intentions are not pure, and are certainly profit driven, I don’t think they have a master plan to destroy the fidiverse. I think they see it in their long term interest for more people to be on the fidiverse so they can more easily compete with TikTok, X, and whatever comes next without the problems of platform lockin and account migration. Also meta is probably the biggest player in open source llm development, so they’ve earned some open source brownie points from me, particularly since I think AI is going to be a big thing and open source development is crucial so we don’t end up ina world where two or three companies control the AGI that everyone else depends on. So my opinion of Meta is evolving past the Cambridge Analytica taste that’s been in my mouth for years.