As the title suggests I was laid off on Friday along with a handful of others. I was in my last position for close to 5 years. For 5 years I worked M-F with my coworkers, had the same daily meetings together, went through the same BS together, all of it.
Now it’s Monday morning and I’m sitting at my home office desk feeling like I’m just floating in the void. No meetings, nothing on my calendar, no deadlines to meet, no one from work to talk to… no responsibilities at all. It just feels weird, and I don’t know how else to say it, or who to say it to who might understand. Financially I’m fine: my wife still has her great-paying job, and even without that we’ve got maybe close to a year of runway, no kids, no mortgage… I realize my situation could be far worse. So I guess my sadness isn’t because of the income loss; it’s more that all of the work and relationships I built in these 5 years just got flicked off like a light switch. It would make me tear up thinking about everyone fading out into …
let that sink in. i applied for a level below my current title just to get my foot in the door at a company i really wanted. and they said i lacked confidence.
i lead a team of 12. i present to the board. i have been the most senior engineer in the room for most of my career.
but 45 minutes on a zoom call with strangers evaluating my every word, and apparently i don’t seem confident enough to be… a senior engineer.
i don’t even know how to respond to that feedback. has anyone else had the experience of being more qualified than the role and still failing because of how interviews work?
There is no shortage of articles and videos and whatever talking about the dangers of over-engineering, and there are plenty of catchy acronyms too - YAGNI, KISS, you name it. And while few of them are really wrong, I think the problem is that we focus on over-engineering much more than we should, all while neglecting under-engineering, which is much more prevalent and much more dangerous.
I don’t know about you (despite the clickbaity title claiming otherwise), but I can confidently say for myself - all the projects I worked on suffered from under-engineering, to a greater or a lesser degree. That was the real problem, and I can’t really recall thinking ‘Yeah, they surely wasted too much good thought on this one’ - it was always ‘Did anyone think at all when writing this?’
We all know how under-engineering manifests: sneaky shortcuts through architectural boundaries, god classes, accidental implicit coupling, silenced compiler warnings …
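To make the "accidental implicit coupling" point concrete, here is a toy Python sketch (invented for illustration, not from any real codebase): two functions coupled through a hidden mutable global, so the dependency never shows up in either signature.

```python
# Toy illustration (invented, not from the post): implicit coupling through
# shared mutable state, one of the under-engineering smells listed above.
_cache = {}

def load_user(uid):
    _cache[uid] = {"id": uid}            # side effect: writes a hidden global
    return _cache[uid]

def greet(uid):
    # silently depends on load_user(uid) having run first
    return f"hello {_cache[uid]['id']}"
```

Nothing in `greet`'s signature warns you that `load_user` must run first; that invisible ordering constraint is exactly the kind of thing a little more up-front design would make explicit.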
Every time I broach this topic, I hear the same thing. “Our well oiled machine actually does 1 week sprints… Actually, we don’t do sprints at all, we’re just continuously delivering and always refining the backlog!”
Good for you. Now let’s talk to the other 90 people in the room.
I’ll be the first to say that I don’t think there is a one-size-fits-all approach for every team. So take all this with a grain of salt.
However, I think most teams put more effort into making work seem deliverable within a 2-week timeframe, and waste more hours on grooming and refinement ceremonies, than they would with slightly longer iterations.
Between grooming, retro, planning, review… That’s often at least 1-2 days of context switching.
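As a rough back-of-envelope (with assumed numbers: say the ceremonies cost about a day and a half per iteration), the fixed ceremony cost shrinks as a share of the iteration as sprints get longer:

```python
# Back-of-envelope: fixed ceremony cost as a share of the iteration.
# Assumed number: grooming + retro + planning + review ~ 1.5 days per iteration.
ceremony_days = 1.5
overhead = {}
for sprint_weeks in (1, 2, 3):
    working_days = sprint_weeks * 5
    overhead[sprint_weeks] = ceremony_days / working_days
    print(f"{sprint_weeks}-week sprint: {overhead[sprint_weeks]:.0%} in ceremonies")
```

Under these assumed numbers the ceremony tax drops from roughly 30% at 1-week sprints to roughly 10% at 3 weeks. The counterargument is slower feedback loops, which is the real trade-off to weigh.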
Also, I’ve found that nobody estimates tickets honestly. Sure, the simple stuff is easy. But for anything slightly complex, you end up needing to break it down further and further and …
We hired two junior devs in the last quarter. Both passed the interview fine. Both can produce working code reasonably fast. But something is off in a way I have not seen before.
When something breaks, they do not debug it. They paste the error into ChatGPT and apply whatever it suggests. If that does not work, they paste the new error. I watched one of them go through four rounds of this before I stepped in and showed them how to read the stack trace. They had never done that before.
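For anyone who wants a concrete picture of what I walked them through, here is a toy snippet (hypothetical names) that produces the kind of stack trace I mean, and how to read it:

```python
# Toy bug (hypothetical names) that produces a stack trace worth reading.
import traceback

def parse_port(cfg):
    return int(cfg["port"])    # KeyError raised here: "port" is missing

def load_config():
    return parse_port({})      # the bug: passes an empty config dict

try:
    load_config()
except KeyError:
    tb = traceback.format_exc()
    print(tb)

# Read it bottom-up: the last line names the exception (KeyError: 'port'),
# the frame above it points at parse_port, and the frames above that show
# how execution got there. No pasting into a chatbot required.
```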
Code reviews are also different. When I ask “why did you structure it this way?” I often get a blank look. The code works, it looks reasonable, but they cannot explain the reasoning because there was no reasoning. They described what they wanted and the AI produced it.
I am not blaming them. They learned to code in an environment where AI tools were available from day one. Of course they use them. But the gap between “can produce working code” and “understands what the …
I built a thermal printer appliance that runs entirely on your local network. No cloud, no accounts, no subscriptions. Turn a dial, press a button, and it prints weather, news, RSS feeds, email, or whatever you need on 58mm receipt paper.
Self-hosted details:
The enclosure is hand-built from walnut and brass - I spent six years as a furniture maker, so the hardware side matters to me as much as …
Open Source. One Docker container. Browser-based. Everything local.
Your files never leave your machine.
30+ tools. Resize, crop, rotate, compress, convert, strip metadata, watermarks, reusable pipelines, full REST API, background removal, object eraser, OCR, face/license plate blur, up-scaling and more.
I’m building this to be genuinely useful, not another AI-wrapped gimmick or subscription trap. No cloud lock-in, no “sign up to continue,” no features paywalled behind a pro tier. Just a tool that does what it says.
I’m actively looking for feedback from people who would actually use this. What tools would you want? What’s missing? What’s annoying? What would make you switch from whatever you’re using now?
GitHub: https://github.com/stirling-image/stirling-image
Documentation: https://stirling-image.github.io/stirling-image/
The Jellyfin team just dropped v10.11.7 and the patch notes contain a pretty heavy warning. It’s listed as a minor release, but the devs have explicitly stated:
“WARNING: This release contains several extremely important security fixes. These vulnerabilities will be disclosed in 14 days as per our security policy. Users of all versions prior to 10.11.7 are advised to upgrade immediately.”
Hello everyone! Google just released their new open-source model family: Gemma 4. This means you can now run a ChatGPT like model at home.
There are four models, and they all have thinking and multimodal capabilities. There are two small ones, E2B and E4B, and two large ones, 26B-A4B and 31B. The 31B model is the smartest, but 26B-A4B is much faster due to its MoE architecture. E2B and E4B are great for phones and laptops.
To run the models locally (laptop, Mac, desktop, etc.), we at Unsloth converted these models so they can fit on your device. You can now run and train the Gemma 4 models via Unsloth Studio: https://github.com/unslothai/unsloth
Recommended setups:
No GPU is required, especially for the smaller …
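As a rough rule of thumb for whether a model fits on your machine (assumed formula: weights only, about 10% overhead, ignoring KV cache and context length; treating E4B as roughly 4B parameters is also an assumption), weights take roughly params × bits ÷ 8 bytes. Note that for the MoE model all 26B weights must be resident, even though only ~4B are active per token:

```python
# Rough memory estimate for local inference (assumed formula: weights only,
# ~10% overhead, ignores KV cache and context length).
def est_gb(params_billion, bits_per_weight, overhead=1.10):
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 2**30

print(f"26B-A4B @ 4-bit: ~{est_gb(26, 4):.1f} GiB")  # MoE: all 26B must be loaded
print(f"31B     @ 4-bit: ~{est_gb(31, 4):.1f} GiB")
print(f"E4B     @ 4-bit: ~{est_gb(4, 4):.1f} GiB")   # assuming ~4B params
```

By this estimate the 26B-A4B at 4-bit needs around 13 GiB just for weights, so a 16 GB machine is borderline once you add context.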
This is the first release candidate of version 5.0. Please note the breaking changes described below!
- Articles per request set to 2 by default.
- Removed empty_postproc as it is no longer needed.

https://ninjacentral.co.za/register woo — finally
It has been a bit over a year since the Australia, Asia and South American usenet servers launched and Africa was always the holdout for worldwide usenet domination. “Got to catch ‘em all” type of thing. :)
I am happy to be able to say that finally, Africa is live for users on Frugal Usenet, Blocknews and Usenetnow. Like Asia and Australia, Africa has long been under-served by… well, all (legit; Free-Usenet doesn’t count) commercial usenet servers, since the beginning really. So we figured we would take the lead yet again and see what happens.
The Africa server addresses for the service you are using can be found in the FAQ pages and newsreader set up guides on the respective websites.
For true global usenet access (two servers and saying “global coverage” does not make it so), users of the above services now have access to the following locations:
I said it before, because local usenet is better usenet, …
Hello everyone! My apologies, I have been busy and should have made this post (along with the post about UE supporting pipelining) a few weeks ago. But we finished rolling out post-quantum key exchange on all of our NNTP servers. Figured the community would want to know.
There’s been a lot of talk here and other places about privacy in this space. Who owns what, who’s logging what, whether providers actually give a damn about their users. We figure the best way to deal with privacy is to just go make it better. So that’s what we did.
The short version: we’re now using X25519MLKEM768. It’s a hybrid that pairs the X25519 key exchange you’re already using with ML-KEM-768, a quantum-resistant algorithm. You get the proven stuff you trust today plus protection against the quantum threat down the road.
On your end, nothing changes. Your client works the same as it always has. If your system runs OpenSSL 3.5.0 or newer you’ll automatically get …
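If you want to check where you stand, a quick Python one-off shows the OpenSSL version your Python links against (the X25519MLKEM768 hybrid group needs OpenSSL 3.5.0 or newer):

```python
import ssl

# Quick check of the OpenSSL that this Python links against; the hybrid
# X25519MLKEM768 TLS group requires OpenSSL 3.5.0 or newer.
print(ssl.OPENSSL_VERSION)
major, minor = ssl.OPENSSL_VERSION_INFO[:2]
supports_hybrid = (major, minor) >= (3, 5)
print("hybrid key exchange available:", supports_hybrid)
```

Caveat: this checks Python's OpenSSL; your newsreader may link a different build, but it's a quick sanity check of your system.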
Once again a reminder that the registration opened up today.
And yes, it’s for adults.
So today I discovered that all of the files on my Plex server have had their file name extensions changed at the end to ‘want_to_cry’. I don’t know how this was done. I can see there is also a txt file called ‘I want to cry’ in each folder, which I have not opened.
Unfortunately, not knowing what I was doing and trying to get the file name extensions to all end with MKV, I chose a folder and selected all the files in it, then selected ‘Rename’ thinking I could remove the want_to_cry extension in one swoop. But I ended up keeping the want_to_cry extension, and I now have 500+ files, ‘File 1.want_to_cry’ through ‘File 500.want_to_cry’.
Has this happened to anyone else before, and is there a correct way to fix it? I’m on a MacBook, if that helps.
Also, what would I need to do to determine not only how this happened and where it came from, but to try and prevent it from happening …
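On the rename question only: if, and only if, the file contents are intact and something merely appended ‘.want_to_cry’ to the names, a small sketch like this (hypothetical helper; test it on a copy of one folder first) can strip the suffix in bulk. If the contents were actually encrypted, which is what a ransom-style txt file usually means, renaming will not bring the data back.

```python
from pathlib import Path

def strip_want_to_cry(folder, bad=".want_to_cry", restore=".mkv"):
    """Rename 'File 1.want_to_cry' (or 'File 1.mkv.want_to_cry') to end in .mkv.

    NOTE: this only fixes names. If the contents were encrypted, the data is
    still gone; verify one renamed file actually plays before going further.
    """
    renamed = []
    for p in Path(folder).glob(f"*{bad}"):
        stem = p.name[: -len(bad)]
        target = p.with_name(stem if stem.endswith(restore) else stem + restore)
        p.rename(target)
        renamed.append(target.name)
    return sorted(renamed)
```

Run it against one folder first and try playing a renamed file; if it plays, the damage was cosmetic, and if not, look into backups rather than renaming.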
It’s been almost 20 years since Plex was officially created, and it’s come a long way. We’ve gained and lost many features over the years (Plex Arcade, and I just found out about Plex Cloud), with recent changes being mainly UI/UX and business model related.
So what do you expect (ideally or realistically) in 10 years?
Just installed Tautulli for the first time and saw this, and haven’t seen mention of it anywhere despite being heavily active on this and similar subs for the last week.
RCE, path traversal, and SQL injection in a single release! It’s worrying that these went unnoticed in the tool for so long, but a nice side effect of the Huntarr fiasco is that there are more eyes on this stuff now. Hopefully this is the beginning of the Plex/*arr/self-hosting communities and ecosystem becoming more security-conscious. And of course, thanks to all the contributors actually fixing these vulns so the rest of us can keep using these tools safely.
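For anyone unfamiliar with how the SQL injection class works (a generic illustration, nothing to do with Tautulli's actual code): string-built queries let input rewrite the query, while parameterized queries compare the input as a literal.

```python
import sqlite3

# Generic illustration of the SQL injection class (not Tautulli's code).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"

# String-built query: the input rewrites the WHERE clause and matches everything.
unsafe = f"SELECT count(*) FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchone())              # (1,)

# Parameterized query: the input is compared as a literal string.
safe = "SELECT count(*) FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchone())  # (0,)
```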
In my case, I can still watch content remotely on devices that were already signed in, but not from a web browser.
These weird dumb posters are among the choices on Plex for It’s Always Sunny in Philadelphia. Is there an easy way to report them to get them removed?