It might be expensive to buy even more hard drives so you can keep one or two remote backups on hand, but please do it right now.
​
I’ve got a RAID 6 setup with around 100 TB of usable space, currently around 60 TB in use. Within a short time two hard drives failed, and we replaced them immediately. The root cause was probably the RAID controller, which suddenly killed three more drives after the first two were replaced. We tried everything, but unfortunately everything is either deleted or corrupted.
​
LUCKILY we have ONE backup at a different location which has most of the files. While it will take some time to rebuild everything, we are very lucky to have that backup. After rebuilding everything, I’ll definitely keep one or two more backups. The price of the hard drives is nothing compared to the value of the data and the time we’ve spent on our media server.
​
So to sum it up: RAID is not a backup. Back up your files right now!
More about that at: …
NZBPlanet has given us an update on their server status and what they’re doing for users that were impacted:
A little update: we have moved back to the fast server today, and NZBs are still populating daily from the previous setup. All services should be working fine; if you see an API error, just click the Test button in Sonarr and then Save, and it will reset it for you. Thank you to our users for their patience during our downtime. We have been around for 11 years this month and have no plans to go anywhere in the next 11. At the end of the month we will run a birthday sale, and we will also add on a month or two for all premium users as a thank-you for their patience during the short downtime we had.
Their indexer still works, billing still works, but I can’t actually log in to the website to update payment methods or contact info.
When I click “Login” I get this website error: https://i.imgur.com/XrgHSR5.png
Tried doing a password reset (LOL too long of a password? https://i.imgur.com/jWnHC0G.png) and I got the same issue :(
Searching /r/usenet nets me a lot of posts saying to dump Supernews, but looking at my NZBGet traffic totals, Supernews has done 608 GB in 2022 — about 40% of provider #2’s total (1.4 TB) and 25% of provider #1’s (2.4 TB) in sheer traffic.
But if I can’t log in, I can’t cancel the service. I’ve e-mailed their support and I’m waiting to hear back, but is there anyone else here still using Supernews AND able to log in to their control panel?
u/greglyda
There have been no new headers on NewsDemon for more than 24 hours.
Support will be back online tomorrow.
Right now I have Newshosting as my provider and DrunkenSlug as my indexer. I need another provider because I’m not finding some files. I get that I need a provider on another backbone, and the wiki has a map of providers vs. backbones. Do I just pick a second provider/backbone at random, or is there some method where certain combinations of backbones are better than others?
I searched for something and somehow stumbled across a Sonarr server (and Radarr, etc.). Everything is secured besides the Sonarr UI/page.
Okay, so I am running Plex with Radarr, Sonarr, Jackett, Bazarr, and Tautulli. So far I’ve been able to figure everything out myself via the individual apps’ wiki pages, but I’ve now spent the last three hours on YouTube trying to answer this question, to no avail…
​
Radarr will initiate a download (when I trigger it) from the Discovery section. It talks to Jackett for the torrent search engine of my choice, and it uses my torrent client of choice on my system. THAT part works.
What I’m curious about is: how do I get completed downloads automatically moved into my designated library folder, renamed, and placed in a sub-folder with the movie’s name? I’ve been trying to find out whether and how Radarr (and the other *arr apps) will do this.
Radarr –> Jackett –> .torrent file –> qBittorrent –> \..\Downloads\
But how do I get it from \..\Downloads\ over to …
Is there some way to do this? I’ve tried {MediaInfo Width}x{MediaInfo Height}, but no go.
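For the moving-and-renaming question above: Radarr does this itself, with no external script, via "Completed Download Handling" plus its renaming settings — it polls qBittorrent and, once the torrent finishes, imports the file into a per-movie sub-folder under your root folder using your naming format. A sketch of the relevant settings (names as they appear in recent Radarr versions; the format strings are illustrative examples, so verify the exact token names against your instance’s naming help):

```
# Settings > Download Clients
Completed Download Handling: Enable = Yes

# Settings > Media Management
Rename Movies        = Yes
Standard Movie Format = {Movie Title} ({Release Year}) {Quality Full}
Movie Folder Format   = {Movie Title} ({Release Year})

# Settings > Media Management > Root Folders
# Add your library folder here; imports land in it, one sub-folder per movie.
```

If qBittorrent runs on a different machine or in a different container than Radarr, you may also need a Remote Path Mapping so Radarr can translate the download client’s path into one it can see.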
Looking for some help. I’m getting the error “All indexers are unavailable due to failures”. When I check the logs and test the indexer, I see “couldn’t resolve host”. When I connect to the container that’s running Sonarr and ping, I’m able to resolve the indexer’s API URL and any other URL.
I’ve tried restarting the container; that doesn’t help. It’s on the latest stable release, and no other containers are having a problem.
​
Any suggestions?
[v3.0.9.1549] NzbDrone.Core.Indexers.Exceptions.IndexerException: Indexer API call returned an error [{
"code": 0,
"message": "Couldn't resolve host"
}]
at NzbDrone.Core.Indexers.BroadcastheNet.BroadcastheNetParser.ParseResponse (NzbDrone.Core.Indexers.IndexerResponse indexerResponse) [0x00345] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\Indexers\BroadcastheNet\BroadcastheNetParser.cs:93
at …
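One thing worth ruling out: ping can succeed where an application’s HTTP client fails, because on some systems ping consults extra lookup sources (e.g. /etc/hosts entries or mDNS) while the app goes through the normal resolver. A minimal sketch for checking name resolution the way an application would — run it inside the container (e.g. via `docker exec`) against your indexer’s API hostname; `localhost` below is just a stand-in:

```python
import socket

def resolve_or_error(host: str) -> str:
    """Resolve a hostname via getaddrinfo (the same path most HTTP clients
    use) and return the first IP address, or an error description."""
    try:
        infos = socket.getaddrinfo(host, None)
        # Each entry is (family, type, proto, canonname, sockaddr);
        # sockaddr[0] is the IP address.
        return infos[0][4][0]
    except socket.gaierror as exc:
        return f"couldn't resolve {host}: {exc}"

# Substitute your indexer's API hostname here when testing in the container.
print(resolve_or_error("localhost"))
```

If this fails inside the container while ping works, the container’s DNS configuration (e.g. Docker’s embedded DNS or the `--dns` setting) is the likely culprit rather than Sonarr itself.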
I seem to have exhausted all remedies; it appears I will not be able to get the drive letter back. r/Techsupport is trying, but nothing has worked so far today.
The only real issue is Sonarr and Radarr. How do I handle this? Sonarr thinks my TV shows are all on F:\TV Shows, but they’re really on Q:\TV Shows. Do I just add the new library? How do I keep from having a lot of missing shows or duplicates in Sonarr?
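If the drive letter really can’t be restored, the usual fix is to update each series’ path in place rather than adding a new library (re-adding risks duplicates and loses history). In recent Sonarr versions the Mass Editor under Series can change the root folder in bulk — decline the offer to move files, since they’re already there. The underlying prefix rewrite is simple; here is a sketch of it as a pure function, with the API usage shown only as a commented outline (the endpoint names come from Sonarr’s v3 API, but verify them against your instance before running anything):

```python
def remap_drive(path: str, old_prefix: str, new_prefix: str) -> str:
    """Rewrite a path's drive prefix; paths on other drives pass through."""
    if path.lower().startswith(old_prefix.lower()):
        return new_prefix + path[len(old_prefix):]
    return path

# Commented sketch of applying this via Sonarr's v3 REST API (untested --
# verify endpoints and payload shape against your instance's API docs):
#
#   import requests
#   base, headers = "http://localhost:8989", {"X-Api-Key": "YOUR_KEY"}
#   for series in requests.get(f"{base}/api/v3/series", headers=headers).json():
#       series["path"] = remap_drive(series["path"], "F:\\", "Q:\\")
#       requests.put(f"{base}/api/v3/series/{series['id']}",
#                    headers=headers, json=series)

print(remap_drive(r"F:\TV Shows\Some Show", "F:\\", "Q:\\"))
# -> Q:\TV Shows\Some Show
```

Either way, trigger a rescan afterwards so Sonarr re-verifies the files at their new location.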