• 0 Posts
  • 80 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • Just kind of dawned on me while looking at the number: Reddit’s licensing deal with Google is valued at $60 million per year. That’s really not very much money at all, considering the amount of data Reddit has and continues to accumulate. And it’s chump change for Google, no doubt. It reveals how little leverage Reddit actually has at this point. This was their flagship deal, and the best they could get was $60 million per year.

    Also puts the API fiasco in a new light. “Look, we need to charge for API calls, because we need to restrict public access to data as a precondition of selling all your shit in a few months to Google, for the financial equivalent of a cup of coffee.”




  • Interesting. I’m curious to know more about what you think of training datasets. It seems like they could be described as a stored representation of reality that checks the boxes you laid out. It’s a very different structure of representation than what we have as animals, but I’m not sure it can be brushed off as trivial. The way an AI interacts with a training dataset is mechanistic, but as you describe, human worldviews can be described in mechanistic terms as well (I do X because I believe Y).

    You haven’t said it, so I might be wrong, but are you pointing to free will and imagination as somehow tied to intelligence in some necessary way?


  • Thanks! I’m not clear on what you mean by a worldview simulation as a scratch pad for reasoning. What would be an example of that process at work?

    For sure, defining intelligence is non-trivial. What clears the bar of intelligence, and what doesn’t, is not obvious to me. That’s why I’m engaging here: it sounds like you’ve put a lot of thought into an answer, but I’m not sure I understand your terms.






  • I think this type of anthropocentrism extends to chess too, actually. I’m not an expert on the subject, but I’ve heard that chess AIs are finding success doing unintuitive things like pushing the a- and h-file pawns in openings. If, 10 years ago, some chess grandmaster had been doing the same thing and finding success, I imagine they would have been seen as creative, maybe even groundbreaking.

    I think the average person underrates the sophistication of AI. Maybe it’s a response to the AI hype. Maybe it’s because we’re scared of AI, and it’s comforting to believe that its operations are trivial. I see irrationality and anger cropping up in discussions of AI that I think stem from a fundamental fear of its transformative power.