Noticed in job listings: all the shitty slop startups and grifters want "AI first, Lovable, Replit".
The serious software engineer listings will have, for example, "TS, PostgreSQL, Node.js".
IMO this is actually great. Let the vibe coders sling their slop in their containment zone jobs
At a certain point, the bottleneck in shipping isn’t code; it’s tracking down context. Before even writing a line, I’m jumping between tools trying to find scattered specs, old decisions, random docs, and half-written tasks across Slack, Notion, email, whatever else.
The bigger issue is that all this data lives in different formats and locations; even something like user info looks different depending on where you check. It slows everything down.
We tried solving this by building task-based patterns that organize relevant context together, and by using fewer tools overall to stay focused. Curious if anyone here has found better ways to manage the chaos that aren't just "communicate more" or "set better processes"?
I use Claude.ai in my work and it’s helpful. It’s a lot faster at RTFM than I am. But what I’m hearing around here is that the C-suite is like “we gotta get on this AI train!” and want to integrate it deeply into the business.
It reminds me a bit of blockchain: a buzzword that executives feel they need to get going on so they can keep the shareholders happy. They seem to want an answer ready for the question "what are you doing to leverage AI to stay competitive?" I worked for a health insurance company in 2011 that had a subsidiary that was entirely about applying blockchain to health insurance. I'm pretty sure that nothing came of it.
edit: I think AI has far more uses than blockchain. I’m looking at how the execs are treating it here.
Is this a universal experience? It feels like every project I've worked on has suffered from bad decisions made years ago that are too deeply entrenched in the architecture to fix. Maybe there is a way to fix the problem, but the time and cost to do so is a non-starter with management. The only choice is to chug along and deal with it, while having occasional meetings to design "bandaids" that let everyone pat themselves on the back for doing something. Sorry if this is more of a rant than anything else, but I'm curious if anyone has anecdotes about longstanding applications at their own jobs that actually feel like they were well built and have stood the test of time and scale.
Anyway, let’s focus on integrating new AI agents and building custom MCP servers to demo “Hello World” level complexity outputs to upper management so the paychecks keep coming.
Deleted old post and posting again with more clarity around testing [thanks everyone for the feedback]. Found it to be a super interesting article regardless.
Airbnb recently completed our first large-scale, LLM-driven code migration, updating nearly 3.5K React component test files from Enzyme to use React Testing Library (RTL) instead. We’d originally estimated this would take 1.5 years of engineering time to do by hand, but — using a combination of frontier models and robust automation — we finished the entire migration in just 6 weeks.
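The article attributes the speedup to pairing frontier models with robust automation: rewrite a file, run its tests, and feed any failures back to the model as context for a retry. A minimal sketch of that retry loop, where `convert` (the LLM call) and `validate` (the test runner) are hypothetical stand-ins, not Airbnb's actual tooling:

```python
def migrate_file(source: str, convert, validate, max_attempts: int = 3):
    """Ask the model to rewrite a test file, retrying with the
    validator's error output as feedback until the tests pass."""
    attempt, feedback = source, ""
    for _ in range(max_attempts):
        attempt = convert(attempt, feedback)  # LLM rewrite (stubbed below)
        ok, feedback = validate(attempt)      # e.g. run the test suite
        if ok:
            return attempt
    return None  # give up; flag the file for manual migration


# Toy stand-ins just to show the control flow:
def fake_convert(src, feedback):
    # Only "fixes" the file once it has seen an error message.
    return src.replace("enzyme", "rtl") if feedback else src

def fake_validate(src):
    return ("enzyme" not in src, "still imports enzyme")

print(migrate_file("import enzyme", fake_convert, fake_validate))  # import rtl
```

The per-file loop is what makes the approach scale: files that converge are done automatically, and only the stubborn remainder needs human attention.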
I feel like I know the "big names" (Nextcloud, Vaultwarden, Jellyfin, etc.), but I keep stumbling across smaller, less talked-about tools that end up being game changers.
Curious what gems the rest of you are running that don't get as much love as the big projects. (Or more love for big projects, I don't discriminate if it works 😅) Bonus points if it's lightweight, Docker-friendly, and not just another media app.
What’s on your can’t live without it list that most people maybe haven’t tried?
G’day r/selfhosted
I'm one of the core maintainers of Drop, the self-hosted Steam platform. Our aim is to replicate all the features of Steam in a self-hosted, FOSS application.
We just released v0.3.0, which brings a bunch of new improvements. But since most of you will hear about Drop for the first time, here’s what it can do:
To give it a whirl, check …
Website | Github | Discord | Demo
Hey y’all, the team is back with an exciting update: RomM 4.0 is out, and it’s our most feature-packed release yet!
RomM is a self-hosted app that allows you to manage your retro game files (ROMs) and play them in the browser.
RomM 4.0: A Major Leap Forward for Retro Game Management - Fediverse.Games Magazine
And it's crazy good! It's an LG6 with 4 GB of RAM and a quad-core Qualcomm chip. Only 0.4 W at idle (while running an n8n server and an SSH session)! And… the phone isn't rooted! Just Termux, plus some debloating with adb. Sadly Docker isn't supported, and I had to build a lot of things from source; it takes some effort, but it's free! And it works great when done correctly. Stop buying servers, use your old phones 🫵
I made a video about copyparty, the selfhosted fileserver I’ve been making for the past 5 years. I’ve mentioned it in comments from time to time, but never actually made a post, so here goes!
Copyparty is a single Python script (also available for Docker etc.) which is a quick way to:
The main focus of the video is the features, but it also touches on configuration. I was hoping it would be easier to follow than the readme on GitHub.
This video is also available to watch on the copyparty demo server, as a high-quality AV1 file and a lower-quality H.264 one.
Hopefully this ends the DS issues. I’m not gonna re-enable my RSS feed just yet but if YOU do please let us know how it goes! :)
edit - DS got back to me and said “Yeah it’s resolved, the cart RSS feed ended up showing full site feed due to an oversight”
This would explain the random nature of all the files that got queued up in our RSS.
Was this scheduled?
It’s back!
It’s down again!
It’s back!
Checking to see if anyone has heard what's going on with dognzb. I'm used to frequent API outages, but it seems to have been offline for a few days now for me.
I recently added a 250GB block from NewsgroupDirect (NGD) as a backup to help fill in any gaps. To test it out, I set it to priority 0 (higher priority) and my other server to priority 1 in my config, grabbed a few NZBs and monitored the downloads.
Interestingly, NGD only hit about 36% completion on the articles. Does NGD only deduct from the 250GB quota based on data that's actually downloaded successfully (i.e. the ~36%), or does it also count failed attempts toward the block?
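For context, the priority setup described above looks roughly like this — assuming a SABnzbd-style `sabnzbd.ini`, where each server gets a `priority` key and lower numbers are tried first; the hostnames here are placeholders, not real providers:

```ini
[servers]
[[news.blockprovider.example]]
# 250GB NGD block account: priority 0, so articles are tried here first
priority = 0

[[news.mainprovider.example]]
# unlimited primary server: priority 1, used when the block misses articles
priority = 1
```

Note this tests the block as a *primary*, the reverse of the usual fill setup: normally the unlimited server gets priority 0 and the block sits behind it to catch only the missing articles.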
I’m new to usenet and have been using dognzb but so far in the trial account it limits you to 10 downloads and the site is always down for maintenance. Any suggestions on a better indexer?
Nevu is a total redesign of Plex’s UI, powered by the Plex Media Server API and bundled with its own web server
Want to help shape the future of Nevu? Android and Android TV versions are now available for closed private testing. Sign up here: …
The B is not centered correctly in the iOS app and hasn’t been for a while. Honestly unusable.
I'm finally ready to start my Plex journey!! Purchased a Verbatim ripper this past week, and work had a laptop they were about to throw out that they said I could have. So happy to say bye to streaming!!!
I told my buddy he should get a $20 Onn box when he was having issues with his older Roku. A few days later, I see this. lmao
So Plex specifically says it does not collect the content titles of Personal Content from your Plex Media Server (stated here “Plex does NOT collect: Content titles of your Personal Content.”).
However I just received my data request from Plex and it shows that the pushSend event for “Discover | Engagement | Push | Rating Reminder v2” seems to be accidentally exposing the exact content titles of personal content in the field dataFields__transactionalData. Below is a redacted excerpt.
Take from this what you will, but after having concerns about data privacy for years, I have decided to move to another platform. You can validate my findings by requesting your own data here.
Field                          Record
userId
email
eventName                      pushSend
timestamp                      2025-01-18T10:58:50Z
dataFields__campaignId
dataFields__campaignName       Discover | Engagement | Push | Rating Reminder v2
dataFields__channelId
dataFields__contentAvailable   FALSE
dataFields__contentId          …