Hey, so I have brand new HDDs I intend to put in a btrfs software RAID. They're Seagate ST4000VX016-3CV104 4TB Skyhawks. Workload is basically write and forget, I will probably never delete a thing.
However, I decided to test them first and noticed that after writing about 160 GB, some SMART counters have gone up significantly. The raw read error rate went from 6,632 to 90,238,872, for example (seemingly all corrected by hardware ECC), and the seek error rate from 143 to 87,661.
Am I reading things correctly? This does not seem like the way healthy drives should behave, does it? It's similar on all of them, though. Are they just trash-tier drives that somehow only work thanks to ECC?
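For what it's worth, Seagate's raw values for these attributes are widely reported (by community reverse-engineering, not official documentation) to pack two counters into one 48-bit number: the upper 16 bits are the actual error count and the lower 32 bits are the total number of operations. Under that assumption, a sketch of decoding the value above (the helper name is made up):

```python
def decode_seagate_raw(raw):
    """Split a Seagate 48-bit raw SMART value into (errors, operations).

    Assumes the commonly reported layout: upper 16 bits = error count,
    lower 32 bits = total operations (sectors read / seeks performed).
    """
    errors = raw >> 32
    operations = raw & 0xFFFFFFFF
    return errors, operations

# The raw read error rate from the post:
print(decode_seagate_raw(90238872))  # → (0, 90238872): zero errors, ~90M ops
```

If that layout holds, the scary-looking number is just an operation counter ticking up as you write, with zero real errors, which would make the drives perfectly healthy.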
Tons of people are making Python comparisons regarding indentation here. I disagree. If you make an indentation error in Python, you will usually notice it right away: partly because the logic is off or you're referencing stuff that's not in scope, partly because, if you are a sane person, you use a formatter and a linter when writing code.
The places you can make these errors are also very limited: at most at the very beginning and very end of a block. I can remember a single indentation error that I only caught during debugging, and that's it. 99% of the time your linter will catch them.
YAML is much worse in that regard, because you are not programming, you are structuring data. There is a high chance nothing will immediately go wrong: items have default values, high-level languages might hide mistakes, badly trained programmers might be quick to cast stuff and not question it, and most of the time tools can't help you either, because they cannot know you meant to create a different structure.
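A concrete (invented) example of the failure mode: one space of missing indentation produces a different, perfectly valid document instead of an error.

```yaml
# Intended: two servers, each with its own port.
servers:
  web:
    port: 80
  db:
  port: 5432   # one level of indentation missing: "port" is now a direct
               # child of "servers", and "db" silently becomes null
```

Every YAML parser accepts this without complaint; you only find out when something downstream reads `servers.db` and gets nothing.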
That said, while I much prefer TOML for being significantly simpler, I can't say YAML doesn't get the job done. It's also very readable as long as you don't go crazy with nesting. What's annoying about it is the number of very subtle mistakes it allows you to make. I get super anxious when writing YAML.