Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • There was a famous bug that made it into 95 and 98, a tick counter that caused the system to crash after about a month and a half of uptime. It went unnoticed for so long because there were so many other bugs causing stability problems that it wasn’t obvious.

    I will say that classic MacOS, which is what Apple was shipping at the time, was also pretty unstable. Personal computer stability improved a lot in the early 2000s: Mac OS X came out, and Microsoft shifted consumers onto a Windows-NT-based OS.

    EDIT:

    https://www.cnet.com/culture/windows-may-crash-after-49-7-days/

    A bizarre and probably obscure bug will crash some Windows computers after about a month and a half of use.

    The problem, which affects both Microsoft Windows 95 and 98 operating systems, was confirmed by the company in an alert to its users last week.

    “After exactly 49.7 days of continuous operation, your Windows 95-based computer may stop responding,” Microsoft warned its users, without much further explanation. The problem is apparently caused by a timing algorithm, according to the company.
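    The alert doesn’t spell out the mechanism, but the 49.7-day figure is exactly what you get when a 32-bit counter of milliseconds since boot wraps around: 2^32 ms is about 49.71 days. A quick sketch of the arithmetic, plus the kind of naive deadline check that breaks at the wrap (the exact comparison Windows used is my assumption):

    ```python
    # 2^32 milliseconds is where a 32-bit tick counter wraps around.
    MS_PER_DAY = 1000 * 60 * 60 * 24
    print(2**32 / MS_PER_DAY)        # -> 49.71...: the "49.7 days" figure

    def naive_expired(now_ms, deadline_ms):
        # Breaks at wraparound: a deadline landing just past the wrap is a
        # small number, so it looks like it already passed ~49.7 days ago.
        return now_ms >= deadline_ms

    def wrap_safe_expired(now_ms, deadline_ms):
        # Compare the 32-bit difference instead; correct across the wrap
        # for deadlines less than ~24.8 days out.
        return (now_ms - deadline_ms) % 2**32 < 2**31

    now = 2**32 - 300                   # 300 ms before the counter wraps
    deadline = (now + 2000) % 2**32     # 2 s from now, i.e. 1700 post-wrap
    print(naive_expired(now, deadline))      # True: fires ~49.7 days early
    print(wrap_safe_expired(now, deadline))  # False: 2 s still remain
    ```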




  • tal@lemmy.today to Selfhosted@lemmy.world: Where to start with backups?

    If databases are involved they usually offer some method of dumping all data to some kind of text file. Usually relying on their binary data is not recommended.

    It’s not so much text or binary. It’s because a normal backup program that just treats a live database file as a file to back up is liable to have the DBMS software write to the database while it’s being backed up, resulting in a backed-up file that’s a mix of old and new versions, and may be corrupt.

    Either:

    1. The DBMS needs to have a way to create a dump — possibly triggered by the backup software, if it’s aware of the DBMS — that won’t change during the backup

    or:

    2. One needs to have filesystem-level support to grab an atomic snapshot (e.g. one takes an atomic snapshot using something like btrfs and then backs up the snapshot rather than the live filesystem). This avoids the issue of the database file changing while the backup runs.

    In general, if this is a concern, I’d tend to favor #2 as an option, because it’s an all-in-one solution that deals with all of the problems of files changing while being backed up: DBMSes are just a particularly thorny example of that.
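    As a minimal sketch of option #2, assuming the database files live on a btrfs subvolume (the paths and backup destination are hypothetical; btrfs subvolume snapshot and rsync are real commands):

    ```python
    # Back up an atomic btrfs snapshot instead of the live filesystem.
    import subprocess

    SUBVOL = "/srv/data"        # btrfs subvolume holding the DB files
    SNAP = "/srv/.backup-snap"  # read-only snapshot taken for the backup

    # Atomic, read-only snapshot; the DBMS keeps writing to SUBVOL.
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SUBVOL, SNAP],
                   check=True)
    try:
        # The snapshot is frozen, so a plain file-level copy is consistent.
        subprocess.run(["rsync", "-a", f"{SNAP}/",
                        "backup-host:/backups/data/"], check=True)
    finally:
        subprocess.run(["btrfs", "subvolume", "delete", SNAP], check=True)
    ```

    For a DBMS with a write-ahead log, restoring a copy made this way looks like recovering from a power loss, which is a failure mode it’s already designed to handle.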

    Full disclosure: I mostly use ext4 myself, rather than btrfs. But I also don’t run live DBMSes.

    EDIT: Plus, #2 also provides consistency across different files on the filesystem, though that’s usually less critical. Like, you won’t run into a situation where software on your computer updates File A, then does a sync(), then updates File B, but your backup program grabs the new version of File B and the old version of File A. Absent help from the filesystem, your backup program can’t know where write barriers spanning different files are happening.

    In practice, that’s not usually a huge issue, since fewer software packages are gonna be impacted by this than by write ordering internal to a single file, but it is permissible for a program, under Unix filesystem semantics, to expect that the write order persists there and kerplode if it doesn’t…and a traditional backup won’t preserve it the way that a backup with help from the filesystem can.
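    To make the cross-file case concrete, here’s a hypothetical program relying on that ordering (file names invented for illustration):

    ```python
    # Invariant: "if file-b has the new contents, file-a already does too."
    import os

    def write_durably(path, data):
        with open(path, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # write barrier: durable before returning

    write_durably("file-a", "new contents")  # step 1
    write_durably("file-b", "new contents")  # step 2, only after A is durable

    # A file-by-file backup that copies file-b before file-a can still record
    # new-B alongside old-A, violating the invariant. A filesystem snapshot
    # captures a single instant, so it can't.
    ```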



  • tal@lemmy.today to Technology@beehaw.org: Move Over, ChatGPT

    In all fairness, while this is a particularly bad case, the fact that it’s often very difficult to safely fiddle with environment variables at runtime in a process, but very convenient as a way to cram extra parameters into a library, has meant that a lot of human programmers who should know better have created problems like this too.

    IIRC, setting the timezone for some of the POSIX time APIs on Linux has the same problem, and that’s a system library. And SDL and, IIRC, some of the Linux 3D stack have used environment variables as a way to pass parameters out-of-band to libraries, which becomes a problem when programs start dicking with them at runtime. I remember reading an article from someone working on Linux gaming about how various programs and libraries for games would setenv() to fiddle with these, and races associated with that were responsible for a substantial number of the crashes that they’d seen.

    setenv() is not thread-safe or signal-safe. In general, reading environment variables in a program is fine, but messing with them in very many situations is not.

    searches

    Yeah, the first thing I see is someone talking about how its lack of thread-safety is a problem for TZ, which is the time-zone variable that’s been a pain for me a couple of times in the past.

    https://news.ycombinator.com/item?id=38342642
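    As an illustration of the TZ hazard, a minimal sketch (Unix-only, since time.tzset() is what re-reads TZ):

    ```python
    import os
    import time

    def local_hour(tz):
        os.environ["TZ"] = tz  # process-global mutation, visible everywhere
        time.tzset()
        return time.localtime().tm_hour

    # Fine in a single thread:
    print(local_hour("UTC"))
    print(local_hour("America/New_York"))

    # With threads, this interleaving is possible:
    #   thread A: os.environ["TZ"] = "UTC"
    #   thread B: os.environ["TZ"] = "America/New_York"; time.tzset()
    #   thread A: time.tzset(); time.localtime()  <- wrong zone for A
    # In C it's worse: setenv() can reallocate the environment block while
    # another thread's getenv() is mid-read: a crash, not just a wrong answer.
    ```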

    Back on your issue:

    Claude, being very smart and very good at drawing a straight line between two points, wrote code that took the authentication token from the HTTP request header, modified the process’s environment variables, then called the library

    for the uninitiated - a process’s environment variables are global. and HTTP servers are famously pretty good at dealing with multiple requests at once.

    Note also that a number of webservers used to fork to handle requests — and I’m sure that there are still some now that do so, though it’s certainly not the highest-performance way to do things — and in that situation, this code would avoid the race, since each forked child gets its own copy of the environment.
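    A stripped-down, hypothetical reduction of that failure mode in a threaded server (names invented; the sleep just widens the race window):

    ```python
    import os
    import threading
    import time

    def library_call():
        # Stands in for a library that reads parameters from the environment.
        return os.environ.get("AUTH_TOKEN")

    def handle_request(token, results):
        os.environ["AUTH_TOKEN"] = token  # global: every thread sees this
        time.sleep(0.01)                  # another request runs here
        results.append((token, library_call()))

    results = []
    threads = [threading.Thread(target=handle_request, args=(t, results))
               for t in ("token-for-alice", "token-for-bob")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)  # a request will likely observe the *other* user's token
    ```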

    searches

    It sounds like Apache used to and apparently still can do this:

    https://old.reddit.com/r/PHP/comments/102vqa2/why_does_apache_spew_a_new_process_for_each/

    But it does highlight one of the “LLMs don’t have a broad, deep understanding of the world, and that creates problems for coding” issues that people have talked about. Like, part of what someone is doing when writing software is identifying situations where behavior isn’t defined and clarifying that, either via asking for requirements to be updated or via looking out-of-band to understand what’s appropriate. An LLM that works by looking at what’s commonly done in its training set just isn’t in a good place to do that, and that’s kinda a fundamental limitation.

    I’m pretty sure that the general case of writing software is AI-hard, where the “AI” referred to by the term is an artificial general intelligence that incorporates a lot of knowledge about the world. That is, you can probably make an AI that writes software, but it won’t be just an LLM of the “generative AI” sort that we have now.

    There might be ways that you could incorporate an LLM into a system that writes software. But I don’t think that it’s just going to be a raw “rely on an LLM taking in a human-language set of requirements and spitting out code”. There are just things that that can’t handle reasonably.


  • I think that the problem will be if software comes out that doesn’t target home PCs. That’s not impossible. I mean, that happens today with Web services. Closed-weight AI models aren’t going to be released to run on your home computer. I don’t use Office 365, but I understand that at least some of that is a cloud service.

    Like, say the developer of Video Game X says “I don’t want to target a ton of different pieces of hardware. I want to tune for a single one. I don’t want to target multiple OSes. I’m tired of people pirating my software. I can reduce cheating. I’m just going to release for a single cloud platform.”

    Nobody is going to take your hardware away. And you can probably keep running Linux or whatever. But…not all the new software you want to use may be something that you can run locally, if it isn’t released for your platform. Maybe you’ll use some kind of thin-client software — think telnet, ssh, RDP, VNC, etc for past iterations of this — to use that software remotely on your Thinkpad. But…can’t run it yourself.

    If it happens, I think that that’s what you’d see. More and more software would just be available only to run remotely. Phones and PCs would still exist, but they’d increasingly run a thin client, not run software locally. Same way a lot of software migrated to web services that we use with a Web browser, but with a protocol and software more aimed at low-latency, high-bandwidth use. Nobody would ban existing local software, but a lot of it would stagnate. A lot of new and exciting stuff would only be available as an online service. More and more people would buy computers that are only really suitable for use as a thin client — fewer resources, closer to a smartphone than what we conventionally think of as a computer.

    EDIT: I’d add that this is basically the scenario that the AGPL is aimed at dealing with. The concern was that people would just run open-source software as a service. They could build on that base, make their own improvements. They’d never release binaries to end users, so they wouldn’t hit the traditional GPL’s obligation to release source to anyone who gets the binary. The AGPL requires source distribution to people who even just use the software.


  • I will say that, realistically, in terms purely of physical distance, a lot of the world’s population is in a city and probably isn’t too far from a datacenter.

    https://calculatorshub.net/computing/fiber-latency-calculator/

    It’s about five microseconds of latency per kilometer down fiber optics, so ten microseconds per kilometer for a round trip.

    I think a larger issue might be bandwidth for some applications. Like, if you want to unicast uncompressed video to every computer user, say, you’re going to need an ungodly amount of bandwidth.

    DisplayPort looks like it’s currently up to 80Gb/sec. Okay, not everyone is currently saturating that, but if you want comparable capability, that’s what you’re going to have to be moving from a datacenter to every user. For video alone. And that’s assuming that they don’t have multiple monitors or something.

    I can believe that it is cheaper to have many computers in a datacenter. I am not sold that any gains will more than offset the cost of the staggering fiber rollout that this would require.
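    The back-of-the-envelope arithmetic behind both claims, with illustrative figures:

    ```python
    # Light in fiber travels at roughly 2/3 of c: ~200,000 km/s, ~5 us/km.
    FIBER_KM_PER_S = 200_000

    def round_trip_ms(km):
        return 2 * km / FIBER_KM_PER_S * 1000

    print(round_trip_ms(50))    # ~0.5 ms to a datacenter 50 km away
    print(round_trip_ms(4000))  # ~40 ms across a continent

    # Bandwidth: uncompressed 4K at 120 Hz, 3 channels, 10 bits per channel.
    bits_per_s = 3840 * 2160 * 120 * 3 * 10
    print(bits_per_s / 1e9)     # ~29.9 Gb/s for one monitor, before overhead
    ```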

    EDIT: There are situations where it is completely reasonable to use (relatively) thin clients. That’s, well, what a lot of the Web is — browser thin clients accessing software running on remote computers. I’m typing this comment into Eternity before it gets sent to a Lemmy instance on a server in Oregon, much further away than the closest datacenter to me. That works fine.

    But “do a lot of stuff in a browser” isn’t the same thing as “eliminate the PC entirely”.



  • You could also just only use Macs.

    I actually don’t know what the current requirement is. Back in the day, Apple used to build some of the OS — like QuickDraw — into the ROMs, so unless you had a physical Mac, not just a purchased copy of MacOS, you couldn’t legally run MacOS: the ROM contents were copyrighted, and running the OS without Apple hardware meant infringing on that copyright. Apple obviously doesn’t care about this most of the time, but I imagine that if it became institutionalized at places that make real money, they might.

    But I don’t know if that’s still the case today. I’m vaguely recalling that there was some period where part of Apple’s EULA for MacOS prohibited running MacOS on non-Apple hardware, which would have been a different method of trying to tie it to the hardware.

    searches

    This is from 2019, and it sounds like at that point, Apple was leveraging the EULAs.

    https://discussions.apple.com/thread/250646417?sortBy=rank

    Posted on Sep 20, 2019 5:05 AM

    The widely held consensus is that it is only legal to run virtual copies of macOS on a genuine Apple made Apple Mac computer.

    There are numerous packages to do this but as above they all have to be done on a genuine Apple Mac.

    • VMware Fusion - this allows creating VMs that run as windows within a normal Mac environment. You can therefore have a virtual Mac running inside a Mac. This is useful to either run simultaneously different versions of macOS or to run a test environment inside your production environment. A lot of people are going to use this approach to run an older version of macOS which supports 32bit apps as macOS Catalina will not support old 32bit apps.
    • VMware ESXi aka vSphere - this is a different approach known as a ‘bare metal’ approach. With this you use a special VMware environment and then inside that create and run virtual machines. So on a Mac you could create one or more virtual Mac but these would run inside ESXi and not inside a Mac environment. It is more commonly used in enterprise situations and hence less applicable to Mac users.
    • Parallels Desktop - this works in the same way as VMware Fusion but is written by Parallels instead.
    • VirtualBox - this works in the same way as VMware Fusion and Parallels Desktop. Unlike those it is free of charge. Ostensibly it is ‘owned’ by Oracle. It works but at least with regards to running virtual copies of macOS is still vastly inferior to VMware Fusion and Parallels Desktop. (You get what you pay for.)

    Last time I checked Apple’s terms you could do the following.

    • Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of doing software development
    • Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of testing
    • Run a virtualised copy of macOS on a genuine Apple made Mac for the purposes of being a server
    • Run a virtualised copy of macOS on a genuine Apple made Mac for personal non-commercial use

    No. Apple spells this out very clearly in the License Agreement for macOS. Must be installed on Apple branded hardware.

    They switched to ARM in 2020, so unless their legal position changed around ARM, I’d guess that they’re probably still relying on the EULA restrictions. That being said, EULAs have also been thrown out for various reasons, so…shrugs

    goes looking for the actual license text.

    Yeah, this is Tahoe’s EULA, the most-recent release:

    https://www.apple.com/legal/sla/docs/macOSTahoe.pdf

    Page 2 (of 895 pages):

    They allow running it only on Apple-branded hardware for individual purchases unless you buy from the Mac App Store. For Mac App Store copies, they allow up to two virtual instances of MacOS to be executed on Apple-branded hardware that is also running the OS, and only under certain conditions (like for software development). And for volume purchase contracts, they say that the terms are whatever the purchaser negotiated. I’m assuming that there’s no chance that Apple is going to grant some “go use it as much as you want whenever you want to do CI tests or builds for open-source projects targeting MacOS” license.

    So for the general case, the EULA prohibits you from running MacOS on non-Apple hardware anywhere.





  • Milsim games involve heavy ray tracing

    I guess it depends on what genre subset you’re thinking of.

    I play a lot of milsims — looks like I have over 100 games tagged “War” in my Steam library. Virtually none of those are graphically intensive. I assume that you’re thinking of recent infantry-oriented first-person-shooter stuff.

    I can only think of three that would remotely be graphically intensive in my library: ArmA III, DCS, and maybe IL-2 Sturmovik: Battle for Stalingrad.

    Rule the Waves 3 is a 2D Windows application.

    Fleet Command and the early Close Combat titles date to the '90s. Even the newer Close Combat titles are graphically-minimal.

    688(i) Hunter/Killer is from 1997.

    A number of them are 2D hex-based wargames. I haven’t played any of Gary Grigsby’s stuff, but that guy is an icon, and all his stuff is 2D.

    If you go to Matrix Games, which sells a lot of more hardcore wargames, a substantial chunk of their inventory is pretty old, and a lot is 2D.



  • Yes. For a single change. Like having an editor with 2 minute save lag, pushing commit using program running on cassette tapes or playing chess over snail-mail. It’s 2026 for Pete’s sake, and we won’t tolerate this behavior!

    Now of course, in some Perfect World, GitHub could have a local runner with all the bells and whistles. Or maybe something that would allow me to quickly check for progress upon the push or even something like a “scratch commit”, i.e. a way that I could testbed different runs without polluting history of both Git and Action runs.

    For the love of all that is holy, don’t let GitHub Actions manage your logic. Keep your scripts under your own damn control and just make the Actions call them!

    I don’t use GitHub Actions and am not familiar with it, but if you’re using it for continuous integration or build stuff, I’d think that it’s probably a good idea to have that decoupled from GitHub anyway, unless you want to be unable to do development without an Internet connection and access to GitHub.

    I mean, I’d wager that someone out there has already built some kind of system to do this for git projects. If you need some kind of isolated, reproducible environment, maybe use Podman or similar and have some small framework to run things in it?
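    Something like this hypothetical wrapper is the kind of thing I mean (podman and the rust image exist; the mount layout and the test command are assumptions on my part):

    ```python
    # Run the project's tests inside a throwaway container so the
    # environment is reproducible and decoupled from GitHub.
    import os
    import subprocess

    subprocess.run([
        "podman", "run", "--rm",
        "-v", f"{os.getcwd()}:/src",  # mount the working tree
        "-w", "/src",
        "docker.io/library/rust:latest",
        "cargo", "test",
    ], check=True)
    ```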

    like macOS builds that would be quite hard to get otherwise

    Does Rust not do cross-compilation?

    searches

    It looks like it can.

    https://rust-lang.github.io/rustup/cross-compilation.html
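    For instance, a minimal sketch (the target triple is real; note that compiling is the easy half, and linking a macOS binary still needs an appropriate linker and SDK, which is the painful part):

    ```python
    # Cross-compile a Rust project for Apple Silicon from another host.
    import subprocess

    TARGET = "aarch64-apple-darwin"
    subprocess.run(["rustup", "target", "add", TARGET], check=True)
    subprocess.run(["cargo", "build", "--release", "--target", TARGET],
                   check=True)
    ```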

    I guess maybe MacOS CI might be a pain to do locally on a non-MacOS machine. You can’t just freely redistribute MacOS.

    goes looking

    Maybe this?

    https://www.darlinghq.org/

    Darling is a translation layer that lets you run macOS software on Linux

    That sounds a lot like Wine

    And it is! Wine lets you run Windows software on Linux, and Darling does the same for macOS software.

    As long as that’s sufficient, I’d think that you could maybe run MacOS CI in Darling in Podman? Podman can run on Linux, MacOS, Windows, and BSD, and if you can run Darling in Podman, I’d think that you’d be able to run MacOS stuff on whatever.





  • The publishers can do it via uploading beta branches, but there’s also a way to tell the Steam client to fetch old versions independently of that. I remember it coming up specifically with Skyrim, because updates broke a lot of modded environments, and it takes a long time for a lot of mods to be updated (during which time people couldn’t play their modded installs).

    searches

    https://steamcommunity.com/app/489830/discussions/0/4032473829603430509/

    The download_depot Steam console command.

    The above link is about Skyrim, but also links to a non-Skyrim-specific guide that talks about how to obtain manifest IDs for versions of other games.

    But, yeah. It’s really not how Steam’s intended to be used, and I imagine that hypothetically, one day, it could stop working.

    There are also IIRC some ways to block Steam from updating individual games, but again, not intended functionality.

    searches

    https://steamcommunity.com/discussions/forum/0/3205995441631274440/

    If you specifically want control over game updates for some game, then GOG can be a major benefit for that.

    One concern I have is that games can be purchased — Oxygen Not Included, for example, was purchased by Tencent, which added data-mining. Fortunately, in that case, Tencent was open about what they were doing, and allowed players to opt out — if they let Tencent log data about them, they could “earn” various in-game rewards. But I could imagine less-pleasant malware being attached to games after someone purchases IP rights to them and just pushes it out. Can’t do that with GOG, since there’s no channel intrinsically available to a game publisher to push updates out (unless the game has that built-in to itself).