So I need to start off this post with a few full disclosures because apparently if I’m not explicit with some remarks, everyone will focus on the obvious elephants in the room.
Note: All advice is mere suggestion. Nobody knows your situation better than you do. Exercise your best judgement. Not a single token was consumed in the generation of this post.
Now that we have that out of the way, I want to talk about a trend that I see all too often in our industry.
There is this trend where management / executive leadership makes a decision, like downsizing the company, and the consequences of those decisions often fall on the employees. Now obviously businesses sometimes have to make hard decisions to stay afloat, like cutting jobs, reducing the workforce, whatever you want to call it.
I’m here to tell you that you don’t have to let the stress from those decisions drown you.
In fact, I’m here to tell you that you shouldn’t. A lot of the time, but not all …
I’m so tired of fighting for good engineering practices. Clean code, high quality tests, pragmatic use of AI, code de-duplication, extensible design, and so on. Yes all these things are good - but they have never once rewarded me.
The only thing I ever see get rewarded is
It doesn’t matter how clean we get the code. It doesn’t matter how many defects we prevent. It doesn’t matter that spending 20% more now means every quarter for the next 3 years we spend 10% less. It doesn’t matter if you convince your tech lead or EM or PM to go a different way. It doesn’t matter if you have high-quality, actionable metrics; you just need numbers that look good. “200% increase in usage” sounds better than “we have 2 new users”. None of it …
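To make that trade-off claim concrete, here’s a toy back-of-the-envelope calculation. All figures are assumed for illustration (baseline of 100 units per quarter); the point is just that the one-time extra spend is dwarfed by the recurring savings:

```python
# Toy numbers, purely illustrative. Baseline quarterly spend: 100 units.
baseline = 100

# "Spending 20% more now": a one-time extra investment of 20 units.
extra_now = baseline * 20 // 100

# "Every quarter for the next 3 years we spend 10% less":
# 12 quarters, each saving 10 units.
quarterly_saving = baseline * 10 // 100
total_saving = quarterly_saving * 12

# Net benefit over the 3 years: 120 saved minus 20 invested = 100 units.
net_benefit = total_saving - extra_now
print(net_benefit)
```

Even with these made-up numbers, the upfront cost pays for itself after two quarters; everything after that is pure savings.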
The API call took 200ms. Measured it, verified it, fast as hell.
Three weeks after launch the client tells me users are complaining the results “don’t feel right”. Not wrong, not slow. Just don’t feel right.
I spent two days looking for bugs. Nothing. Results were correct, latency was fine.
Then a user screenshot came through. The user had written: “It feels like it’s just making something up. It comes back too fast.”
The feature was a search over a knowledge base. In the user’s mental model, that should take a second. When it came back instantly, it broke their model - they read it as “this didn’t actually process anything.”
I added a minimum display time of 1.2s with a loading animation. API still ran and returned in 200ms. User sees 1.2 seconds of “working”.
Complaints stopped within a week.
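For what it’s worth, the minimum-display-time trick is only a few lines. Here’s a rough server-side sketch of the idea; the 1.2s threshold matches what I used, but the function and names are just for illustration, and the loading animation itself lives in the UI:

```python
import time

MIN_DISPLAY_SECONDS = 1.2  # floor on how long the user sees "working"

def search_with_floor(run_query):
    """Run the real query, but hold the result until the floor elapses."""
    start = time.monotonic()
    result = run_query()  # still returns in ~200ms
    remaining = MIN_DISPLAY_SECONDS - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)  # keep the loading animation up a bit longer
    return result
```

The same idea works client-side: kick off the request and a 1.2s timer in parallel, and only render the result once both have finished.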
The part I can’t shake: the technically correct solution was perceived as broken. The technically …
Got laid off under the label of “business restructuring,” and the way it was handled says a lot about how some companies operate.
I understand layoffs happen. Markets change. Businesses make decisions.
But what is increasingly frustrating is seeing profitable mid-sized companies hide behind the broader tech-layoff narrative created by giants like Oracle and others—using “restructuring” as a blanket justification while continuing executive spending and operating from positions of financial strength.
In my case:
- I was in my regular stand-up call minutes before being informed.
- No prior indication that my role was at risk.
- Within 10 minutes, all access was revoked.
- Decision appeared to be driven remotely by leadership with little to no understanding of the team, product, or the work being delivered.
What makes it worse is when capable engineers doing meaningful work—especially in fast-moving areas like AI—are reduced to spreadsheet entries in decisions made oceans away by …
I’ve been using AI coding tools heavily for the past while, and I’ve settled into a workflow that works for me.
When I let AI go fully agentic on a feature, even with a good spec, I feel disconnected from the codebase. The code is there. It works. But I don’t generally get how it works. I just know the result. I don’t know what it actually does. And that bothers me.
I got burned once. Wrote a short prompt, let AI implement a whole feature, went to test it, and the thing totally diverged from what I wanted. I couldn’t even course-correct because I had no idea what it built. Had to scrap everything and start over.
Since then I’ve been doing spec-first, but with my hands in it. I generate a spec, then I argue with it, poke holes in it, point out flaws, architect it myself. Once it’s workable I implement the skeleton by hand. The schema, the core logic, the architecture. Then I feed it back to improve the spec more. As I implement I find more …
I was thinking about the NSA scandals from years ago, the wiretapping, the underwater cables, the backdoors in datacenters. It was a massive international drama.
But then you look at Cloudflare. By design, they are a massive, legal Man-in-the-Middle. They decrypt, inspect, and re-encrypt the traffic of millions of websites. We’ve reached a point where “privacy” means “hidden from everyone EXCEPT Cloudflare.”
It’s the ultimate irony: developers are so obsessed with “security” that they put their entire stack behind a single US-based entity that holds the private keys to half the internet. We basically did the NSA’s job for them, and we did it voluntarily because the dashboard is pretty and the CDN is free.
Am I the only one who finds this centralization terrifying, or have we just accepted that true end-to-end privacy is dead in the name of DDoS protection?
Hello all! Three weeks ago I asked a friend of mine to help me set up a Plex media server. I purchased a mini PC on the cheap (not pictured), an enclosure (not pictured), and some hard drives, and while we were grabbing the supplies I saw this adorable little Pironman and grabbed it plus a Pi 5 as well. Setting up the Plex server with the arr stack was so fun and easy that I looked into what else I could host, and I wound up switching all of my music, e-books, audiobooks, podcasts, etc. over to my new server. I have my Kobo e-reader working with Grimmory (huge shout-out to those devs).
I’m in the process of implementing the 3-2-1 backup method and will eventually switch my cloud storage over too!
These selfhosted projects have been such a joy to do. I am so grateful to the community that has created such amazing software (and I’ve made sure to tip the devs when possible). Also, I’ve loved doing these so much that I’ve begun writing my own project, inspired by Homarr, as a sort of home management …
MXRoute is popularly recommended in this subreddit. Selfhosting e-mail is extraordinarily difficult (at least, achieving reliable deliverability is very challenging), so many selfhosters end up using an established e-mail provider for this service. MXRoute is a fairly large e-mail provider, offering direct-to-customer services and powering various resellers; they have certainly been discussed in plenty of past threads in this sub.
I would like to bring to the community’s attention some recent issues with the company owner, Jar, that may make you wish to avoid doing business with him.
I am not a customer of MXRoute. Rather, I became aware of them due to a thread on another forum I post on. In that forum thread, while discussing another unrelated provider, Jar (owner of MXRoute) posted:
I mean I’ve terminated for a review before (not JUST a …
I have a VPS that I use to reverse proxy incoming web requests to my self-hosted services at home over wireguard. I got an alert recently that CPU usage was spiking, so I logged in to see a newly-created user running masscan.
The VPS runs 3 publicly-exposed services: nginx, ssh, and wireguard.
It was hardened as follows:
I checked, and I can’t find any relevant CVEs for nginx, ssh, or wireguard.
The logs show the following.
At 07:38, I see an authentication failure, followed by systemd unexpectedly rebooting:
Mar 30 07:38:20 login[695]: pam_unix(login:auth): check pass; user unknown
Mar 30 07:38:20 login[695]: pam_unix(login:auth): authentication failure; logname= uid=0 euid=0 …
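When triaging logs like these, a small filter for pam_unix auth failures helps surface the timeline quickly. A quick sketch (the regex and function are mine, not from any standard tool):

```python
import re

# Matches pam_unix authentication-failure lines from the system log,
# capturing the PAM service name (e.g. "login", "sshd").
AUTH_FAIL = re.compile(r"pam_unix\((?P<service>[^()]+):auth\): authentication failure")

def auth_failures(lines):
    """Return (service, full line) for every pam_unix auth failure seen."""
    hits = []
    for line in lines:
        m = AUTH_FAIL.search(line)
        if m:
            hits.append((m.group("service"), line.rstrip()))
    return hits
```

Piping `journalctl` or `/var/log/auth.log` through something like this makes it easy to count failures per service and spot the burst right before the reboot.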
New v26.1 adds direct scanning for NZBs inside archives, speeds up yEnc decoding with rapidyenc, and updates core extraction libraries.
King’s Day is here and we’re offering a complete Usenet setup in one plan: Unlimited Eweka, 2TB Easynews, and a premium VPN - all included free.
€2.50/month for the first 15 months (€37.50 total), renewing at €71.88 per year.
Available for both new and existing users.
New users can sign up and get started right away.
Existing users can use the same link to extend their plan and keep the discounted rate longer.
What’s included:
We’re so much more than just Usenet. Included with this deal is a VPN so you can have the most private and secure online experience, and access to Easynews for mobile Usenet access on any web-enabled device (no software required).
We are proud to operate the most reliable backbone, which means stable speeds, consistent access, and one of the most complete Usenet archives …
Guess King’s Day is starting a little early this year… so we figured we’d join in.
Get a complete Usenet setup - including multiple backbones, web access, and a full security suite - starting at $1.99/month.
$1.99 for Unlimited Newshosting + 1TB Tweaknews + 1TB Easynews + PrivadoVPN
Why this bundle matters:
Two Backbones (US + EU):
Newshosting’s US backbone combined with Tweaknews’ EU backbone gives you stronger completion, better speeds, and more consistent access across Usenet.
Fastest Way to Access Usenet (Easynews):
Access Usenet directly from your browser on any device — no setup required.
Built-in Privacy + Protection (PrivadoVPN):
Secure your connection with encryption, plus ad blocking and threat protection across all your devices.
King’s Day Deal Includes:
A collection of Kingsday promotions from various Usenet providers.
I’ll keep this thread updated as new deals come in, so feel free to check back or share additional offers.
| Provider | Price | Backbone | Features | Retention | Connections | Server/Policy | Source |
|:-|:-|:-|:-|:-|:-|:-|:-|
| EasyUsenet | €2.48/mo (1yr) €1.98/mo (2yr) €1.48/mo (5yr) | Abavia | - | 3800 days | 100 | EU - US / DMCA | Post |
| Eweka | €2.50/mo (€37.50 / 15mo) | Eweka | 2TB Easynews + VPN | 6443 days | 50 | EU / NTD | Post |
| NewsgroupDirect | $38/yr (Triple Play), $50/yr (Grand Slam) | UsenetExpress, Giganews, Uzo Reto, Its Hosted | Triple Play, Grand Slam | 5719+ days | 100 | US/EU / DMCA & NTD | Post |
| Newshosting | $1.99/mo ($29.85) | Omicron | VPN + 1TB Easynews + 1TB Tweaknews | 6446 days | 100 | US / EU / DMCA | Post |
Check this Usenet FAQ:
https://www.reddit.com/r/usenet/wiki/index/
Check this Provider Map for a layout of Usenet providers and backbones: …
Finally took some big steps towards my end game Plex server. Running off a Beelink S12 Pro and the Unas Pro as my main storage device. Bought my Plex Pass about 7 years ago and haven’t looked back. Shout out to all the helpful people on this sub putting in the work helping new people into the hobby. I still remember thinking that an 8TB drive was insane and “there’s no way I’ll be able to fill this up”.
The cable subscription went years ago, and she’d been relying on the free channels you can still get, but the reception was always spotty. After watching her scroll on YT shorts/videos, and seeing the *abysmal* content that she was being shown, I took it upon myself to give her access to her own personal Plex server.
It’s been amazing, she loves it; it’s now on her iPad and her computer. She’s thrilled to be watching things that she remembered and loved and hadn’t seen in years.
It’s a small thing, but it’s really nice that this can exist, just for her.
On a personal note, I’m relieved that YT has been almost completely forgotten in her house, and that she has good things to watch that we’ve already vetted. The elderly are vulnerable in so many ways, and this makes me feel like her world is a bit safer.
Edit: this was a few days ago. It’s back now, I see. Thank you, everyone.
Now, I’m not talking about my server in particular; I mean the service.
I can’t reach it on any device, and it’s not just on my server’s side; the whole site is down.