To connect the SpeedyBee F405 V3 to a GPS module, use the following wiring from the UART6 port. See the table below; the GPS module and flight controller should be oriented as in the image above (the GPS module is placed slightly to the right of the flight controller for annotation purposes).
SpeedyBee F405 V3 Flight Controller
Beitian GPS BN-880
Connector Pins UP
Connector Pins UP
To configure the GPS in Betaflight, set the UART6 mode to GPS in the configuration tab, and the baud rate to Auto.
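For reference, the same settings can be applied through the Betaflight CLI. This is only a sketch: the port identifier (5 = UART6) and the UBLOX provider are assumptions to verify against your own board and GPS module.

```
# Sketch of equivalent Betaflight CLI settings (verify port index with the 'serial' command)
feature GPS
serial 5 2 115200 57600 0 115200
set gps_provider = UBLOX
set gps_auto_baud = ON
save
```

Typing `serial` on its own in the CLI prints the current port assignments, so you can confirm which identifier corresponds to UART6 on your board.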
I had a look on Aliexpress and decided to purchase a Goldenfir 2TB SSD for review from the “Computer & SSD Storage Factory Store” for my Proxmox NAS. I compare it with the Crucial BX500 as I have one on hand too.
Internal 2TB SATA SSD Prices
In total for a 2TB SSD I paid £70.61 which is about £10 cheaper than name-brand SSDs for the same capacity from Amazon. I was skeptical of the price but I decided to test it first before putting it in the NAS.
Integral V Series (INSSD2TS625V2X)
Crucial BX500 (CT2000BX500SSD1)
Samsung 870 QVO (MZ-77Q2T0)
2TB SSD Prices as of 24/04/2023
Checking the SSD using h2testw.exe
As soon as the SSD arrived I ran it through its paces on h2testw.exe to check that it was real and all 2TB was available. The process took several hours so I just left it running while I was at work.
The SSD passed both the write and verify test. All 2TB is available.
Warning: Only 1907547 of 1907711 MByte tested.
Test finished without errors.
You can now delete the test files *.h2w or verify them again.
Writing speed: 85.0 MByte/s
Reading speed: 339 MByte/s
The Goldenfir 2TB SSD in CrystalDiskInfo
I also opened the SSD in CrystalDiskInfo, which confirmed it was essentially brand new. It did have some power-on time, presumably from factory testing.
Goldenfir 2TB SSD in CrystalDiskMark
I ran the SSD through CrystalDiskMark, the most crucial test to me as it would show how it compared to other SSDs.
It looks like the SSD performs only slightly worse than the Crucial BX500. Both were tested using an external USB enclosure.
Here is a comparison with the Crucial BX500.
And for fun here is a comparison with the Crucial MX100 from 2014.
My Review and Closing Thoughts
Overall, I’m happy. It performs slightly worse than competitors, but the difference is negligible, and since I am comfortable keeping the SSD forever I am not too worried about secure erase.
I am moving this website from Vultr to my Proxmox Ryzen 5 3600 virtualization server at home because it is cheaper and I no longer need to host my applications externally.
To protect my home network, I isolated the web server from my home network traffic. This way, even if the website is compromised, my home network will likely be safe from any attacks.
The server doesn’t require much to run; it has run for years on nearly the cheapest hardware and software available on various cloud platforms.
The main problem was that I didn’t get around to making a VLAN to isolate traffic at a network level from my home network.
Having a VLAN allows you to isolate networks, which I will use to split my home network and the network used by the web server VM.
You can read more about my home network here but it needs a bit of an update.
Preparing a backup of WordPress
This website runs on WordPress. WordPress makes backup/restore easy as import/export tools are built-in.
To keep costs down, I have a small WordPress site. Jetpack (I think) compresses and serves images, and almost all media is not hosted on the VPS directly.
I simply need to download everything from the admin panel and then upload it to the clone.
I also want a fresh copy of WordPress because it’s been a while; my first article is from 2014, for example.
Setting up a Home VLAN for the VM
The VM runs on my home server and is disallowed from communicating with other devices on my home network, while still being allowed access to the internet.
External devices are prevented from connecting to the VM by my Ubiquiti router firewall.
I have a few VLANs going around the house so it was just a case of passing the new VLAN over ethernet tagged with its regular traffic to the VM and then using Proxmox to connect the VM using the same tag.
Configuring Proxmox to use the Tagged VLAN Trunk
I had not used a VLAN to tag traffic to Proxmox before; all of my previous VMs used the same network as Proxmox itself.
I was able to set the port the Proxmox server used as both a tagged trunk for VLAN 70 and untagged on VLAN 20.
The way my home network is set up, all LAN traffic arrives at my switch on VLAN 20 and then VLAN 20 is untagged to devices such as my server.
Non-VLAN 70 VMs will be able to access VLAN 70 traffic but not vice versa. I am okay with this as I trust my home VMs.
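On the Proxmox side this boils down to a VLAN-aware bridge plus a tag on the VM’s NIC. A minimal sketch, assuming ifupdown2 syntax; the interface names, address and VM ID here are illustrative, not my actual setup:

```
# /etc/network/interfaces (sketch): bridge carries untagged VLAN 20 plus tagged VLANs
auto vmbr0
iface vmbr0 inet static
    address 192.168.20.5/24      # illustrative management address
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Then attach the web server VM (ID 100 is an example) to the tagged VLAN:
#   qm set 100 --net0 virtio,bridge=vmbr0,tag=70
```

The same tag can of course be set from the VM’s Hardware tab in the Proxmox GUI instead of via `qm set`.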
I hope you enjoyed reading as much as I enjoyed setting this up.
When I first started using Proxmox one thing I wanted to understand was the schedule grammar for backups.
Most of my backups aren’t handled in Proxmox but I did want a quick way of backing up my Minecraft server and as I had a slow 1TB disk attached to Proxmox I thought it worth trying.
When backing up, it’s worth observing the 3-2-1 rule: 3 copies of your data, on 2 different media, with 1 copy offsite. This backup wasn’t just about retaining data in case of loss; it is to facilitate rollbacks in case of irreversible damage or corruption to the server, or a dodgy configuration change.
Because I wanted lots of points in time to roll back to, I used Proxmox over OpenMediaVault, my usual go-to.
Setting Proxmox Backups
Proxmox handles backups at the Datacenter level. In the Proxmox administration dashboard, select Datacenter on the left-hand side, then click on the Backup tab.
From the Backup tab you should see the backups that have been scheduled. Here we can see my Minecraft backup jobs loaded.
I found it difficult to tell from a job schedule when the next few backups would occur. Through the documentation I found that you can check upcoming backup iterations with systemd-analyze.
Checking Proxmox Backup Schedules
The easiest way to check your backup schedule is by using the schedule simulator on the far right of the backup configuration area.
If you want to look ahead at Proxmox backups to see if you have the right schedule set up, you can also use the command below in a shell prompt, replacing the last part of the command with your desired schedule.
This works because backup schedules use a version of the systemd time specification.
The screenshot above is of Ubuntu’s Terminal, but you can run the command in the shell on the Proxmox dashboard directly.
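The look-ahead check is `systemd-analyze calendar`. For example, to preview the next three runs of a Saturday 3 a.m. schedule (the schedule string here is only an example):

```shell
# Print the normalized schedule and the next 3 times it would fire
systemd-analyze calendar --iterations=3 "sat *-*-* 03:00:00"
```

The output shows the normalized form of the expression followed by the next elapse times, which makes it easy to sanity-check a schedule before committing it to a job.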
You can check the time of the next backup by altering the iterations argument as required. Once you’ve got the schedule as you need, alter your job (or make a new one).
Make sure to set the retention period correctly: if you specify a retention period in weeks, only the latest backup from each week will be kept.
One change I made to the schedule (shown in the screenshot) was to keep 24 hours of backups (limited to the timings of my schedule) and lower the fidelity of backups to a weekly basis after 24 hours, to reduce storage consumption. See the documentation, as it’s explained better there.
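The retention side maps to Proxmox’s prune settings. The option names below are real Proxmox prune options, but the values are only a sketch of the 24-hourlies-then-weeklies idea; set yours in the backup job’s retention tab or in the vzdump defaults:

```
# /etc/vzdump.conf (sketch): roll 24 hourly backups, then thin out to weeklies
prune-backups: keep-hourly=24,keep-weekly=4
```

Per-job retention set in the Backup tab overrides these defaults, which is usually where you want it.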
Channel 4’s “The Undeclared War” is a TV Show about a third-party country undermining UK democracy by disrupting UK networks through cyber-attacks. The protagonist is an intern who has a front-row seat to the ordeal and the show is set inside GCHQ, at least that is what I have seen from the first two episodes. I’ll write up more when they are released.
Here is a breakdown of all of the techniques used in the show. It is clear the writers took at least some inspiration from actual real-world scenarios but then bent the rules or changed some aspects to fit the narrative of the episode, which makes the episode a little hard to watch.
The Undeclared War is an inside look at an attack on British internet infrastructure and the inner workings of GCHQ.
The Undeclared War Episode 1
The episode starts out in a fairground, analogous to hacking, as becomes clear when shots of Saara (main character) are interspersed with her sitting in a classroom playing against other hackers.
This is a reference to a staple of hacker culture: the CTF, or Capture the Flag. A CTF is a popular way of introducing or testing a hacker’s ability, so in that sense at least the show got it right! CTFs are usually social events and often very competitive; a good start to the first episode.
There are also some more references for the keen viewer. At one point Saara pulls out a hammer and starts knocking on bricks in a wall; this resembles port knocking, a security-through-obscurity technique whereby a system will not open a port to an application until a client has first sent packets to the device in a specific sequence across various port numbers.
After Saara is done knocking the bricks with a hammer, she is able to remove a brick (or the system opens a port) to view the valuable information inside.
It’s not clear how Saara would know the pattern in which to hit the bricks, but it is possibly something she would have had to capture using packet sniffing, or know by other means, such as accessing the target computer using command-line tools like SSH or even remote desktop.
The show then cuts briefly out of the analogy and back to the real world to show the commands Saara is running on her screen. There is a lot going on, but we can see references to meterpreter at the top.
Meterpreter is a penetration-testing tool used to exploit programs in order to give a hacker remote access to a system, and we can see she has used it to dump the password hashes. In this version of the tool, though, Meterpreter has somehow also decrypted the hashes and displays them on screen before she’s cracked them.
Despite this, she then runs a python program (Python being a popular programming language) that takes a file as input, probably to crack the hashes. To nitpick, it looks like they’ve already been cracked, but perhaps she didn’t have all of the hashes yet.
Python also isn’t a particularly fast language for cracking passwords; more direct access to the hardware is usually preferred so that the hashes can be computed more quickly. Cracking hashes can take days to decades if the password is complex, so every performance advantage counts.
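At its core, cracking is just hashing candidate words and comparing digests. Here is a toy sketch of a dictionary attack using the well-known MD5 of "password"; real tools like hashcat run this same loop on GPUs at billions of guesses per second:

```shell
# Toy dictionary attack: hash each candidate and compare to the target digest
target="5f4dcc3b5aa765d61d8327deb882cf99"   # the well-known MD5 of "password"
for word in letmein hunter2 password; do
  hash=$(printf '%s' "$word" | md5sum | awk '{print $1}')
  if [ "$hash" = "$target" ]; then
    echo "cracked: $word"
  fi
done
# prints: cracked: password
```

MD5 is used here purely because the digest is short and famous; real password stores use salted, deliberately slow hashes precisely to make this loop expensive.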
At the end of the cutscene, Saara runs the command "-sT -vvv -n 18.104.22.168", which seems to be a bit of fat-fingering by Saara, because it’s supposed to be part of the nmap invocation on the line above, but the computer doesn’t seem to mind and dutifully executes the command as though nothing is wrong.
The whole time she seems to switch between Linux and Windows commands arbitrarily and the computer doesn’t seem to mind; she never switches context between Windows and Linux, and the commands she enters throughout the episode don’t really make sense in terms of what is actually possible on one operating system.
We can also see a CVE at the top of the screen. CVEs (Common Vulnerabilities and Exposures) are public identifiers used in various ways to identify and classify vulnerabilities in computer programs, so it doesn’t really make sense that one would be labelled a “private exploit”; CVEs are public by design.
She also tries to take a copy of the Windows box using Volume Shadow Copy, a tool for taking a form of backup. She then decides it’s time to scan for some open ports; it looks like the command "nmap -sT -vvv -n 22.214.171.124", nmap being a port-scanning tool. Not that she actually runs nmap; it just outputs text extremely similar to it.
We can see that the scan lists the following open ports: 445, 139, 53, 443, 80, 6969. Ports 445 and 139 are likely SMB or file shares, ports 80 and 443 suggest a web server, and port 53 is DNS, so this box is perhaps also a DNS server. As for port 6969, I don’t think it’s a real service but rather a joke for the informed (or otherwise) viewer.
Saara spends the rest of the scene walking around with a tool belt on, clearly focused on the task at hand.
Then she is seen using various commands in the terminal, which are mostly nonsense, but it doesn’t complain at all. Clearly, the directors have turned off the output of the command line if the user types out an erroneous command.
At one point a timer pops up, and we can see she runs a command which prints out some hex. Cool, but even some of the best hackers in the world don’t spend their time reading raw hex; it’s like reading a barcode or serial number. It may make sense to computers, but without real context and understanding of what is going on, it’s useless to humans.
Working at GCHQ
In the next hackery-type scenes we see, Saara has learned of the attack and starts looking at the code in a program called IDA at about 16 minutes in.
She spends some time scrolling around the code and at one point finds a lot of “garbage”, a good way of showing that tasks like this are often tedious and hard to follow. When a compiler compiles a program, it strips out any human-readable comments or friendly function names that are easy to follow, so it’s often a lot of scrolling, annotating and more scrolling to determine what the program does.
This part is a little confusing because she is able to identify “garbage” but isn’t able to tell that the code has been obfuscated. Obfuscation is a way to make code harder to reverse engineer by having the program perform its function with extra complexity. Saara’s overseer calls the program “some FinFisher thing”, which isn’t really a method of obfuscation, but perhaps I am misinterpreting what he is saying.
Interestingly, the malware also goes by one name in IDA but is later called something else in the sandbox.
The IDA freeware program lets you read the program as disassembled machine code. Somehow Saara doesn’t notice that the program is written to never run the functions or “garbage” she is looking at, despite the fact that IDA would have clearly annotated this.
The software reverser Phill says that the garbage is to “confuse the look of the code so the antivirus software won’t recognise it as malware”, which sort of makes sense. What he means is that it will change the signature of the program, so the antivirus would not detect it as a known signature, or the program’s behaviour would differ from what the antivirus is designed to detect. Again, something Saara would probably know.
She is offered the opportunity to use their test environment, where she incorrectly corrects him about calling it a sandbox.
When she actually runs the program in the sandbox, it errors out and says it can’t run. The reverse engineer (Phill) suggests emulating actual user behaviour to see if she can trick it into running, but this is bad advice, because they can just reverse the program to determine what is stopping it from running!
Again, something Saara should understand and already know. “Paste in some word documents, scroll around a bit”, lol. Once again, they have IDA, so they would be able to determine exactly what is required to cause this behaviour.
Imagine you are reading a book, but you don’t have time to read all of it, and you really just want to know why the main character’s favourite colour is red. You know that on page 20 they say their favourite colour is red. If we shoe-horn IDA into this analogy, it would give us a direct reference to where the character grew up with a red front door, which is why their favourite colour is red.
Programs need references in the code to establish behaviours, so when the program throws up an error, they can just look through the code, find the error, and trace it back to determine what caused the program to realise it was in a sandbox and stop running. This is basic usage of IDA; it’s what it is designed to do.
Trying to “Paste in some word documents, scroll around a bit” is like trying to mow a lawn with scissors when you have a lawnmower, ineffective and poor use of the tooling they have.
It’s also very unlikely an intern would be vetted enough to have this level of access.
Fear of Attribution
At one point, Danny (Simon Pegg) is reluctant to assign attribution for the malware. This is generally a good call, because advanced persistent threats are known to implant false clues attributing an attack to a different adversary to throw off investigators. The show talks about Russian bots as well, a real-world issue.
Danny is also chastised for running stress tests against the network infrastructure; running this type of test against a production environment during peak hours is a terrible idea.
The hack is also able to take down some parts of the web but leaves others up. This is odd; it may be technically possible, but practically all of these systems will themselves have both redundancy and disaster recovery to bring them back online, especially products with SLAs with their customers.
Many of these systems would be hosted in clouds like AWS or Azure, which generally have mechanisms built in to prevent a global outage from a single point of failure like a country going down. If a BGP route went down, for example, it would not take long before everything was re-established through a new route.
At around 28 minutes in, Phill laughs because Saara has reverse-engineered a library, saying that “we’ve all done it”, but practically it is almost certainly worth checking: you can probably determine that a program is using a library and even check it against a known hash of that library.
The department missing this crucial part of the code by not looking is negligent, and checking is certainly something they would have done. They are looking for exactly what she has found, not something else, so it is odd that they would discount her abilities; it’s a team effort.
The program then opens a URL shortener link, with a top-level domain that isn’t valid, to run some code, which could be anything.
I decided it was a good time to learn docker and actually make a project that uses it, so I created ICO Security Trends, a small and unique dashboard which uses the UK ICO published spreadsheets to produce graphs and insight into the data.
I thought I would include some of my findings which are not immediately evident on the BI Dashboard they provide.
UK ICO Incident Security Trends
Categorisation on incidents described as ‘Other non-cyber incident’ has declined from 2019 to 2022. Roughly on average there are 750 incidents a quarter for ‘Other non-cyber incident[s]’, while ‘Other cyber-incidents’ remain fairly constant at around 60 a quarter.
The ‘Other non-cyber incident’ is generally too broad and should potentially be broken down. Insights into trends in this area are potentially being missed.
Ransomware disclosure has increased since 2019, which coincides with general industry consensus.
There’s a lot more to it, but I thought I’d get it out there already.
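For anyone curious about the Docker side, a dashboard like this typically ships as a small image. The Dockerfile below is only a sketch; the file names, port and entry point are assumptions, not the project’s actual layout:

```dockerfile
# Sketch: containerising a small Python dashboard (names are illustrative)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# 8050 is an illustrative port for the dashboard server
EXPOSE 8050
CMD ["python", "app.py"]
```

Copying `requirements.txt` before the rest of the source lets Docker cache the dependency layer, so edits to the dashboard code don’t trigger a full reinstall on rebuild.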
Corporate Networks are highly thought out and well-designed critical business infrastructure that can span many buildings or geographies. The more complex an organisation is, the more expansive and multi-format the network can be.
A Corporate Network will often have an acceptable use policy and may monitor its usage.
Features of a Corporate Network
Many corporate networks have additional capabilities that home or small-business routers usually are not capable of, such as:
Quality of Service or QoS is a layer 3 network technology that can prioritise (or more importantly de-prioritise) traffic by application, such as streaming, game services or file sharing.
Traffic Shaping is a bandwidth management tool to slow long running or high bandwidth downloads to prioritise other activities and ultimately restrict high utilisation on the network by a single client. This is most useful where bandwidth externally is limited.
VPNs (such as L2TP/IPSec or WireGuard) or SSL tunnels (SSTP) allow corporate networks to link together across global infrastructure. SSL tunnels can ensure that all data accessed by clients is encrypted by the link itself, so that any HTTP traffic, for example, must first travel SSL-encrypted to the VPN routing appliance or provider.
VLANs can segregate and isolate riskier traffic as well as limit chatter or prevent port sniffing. VLANs can also be separated by different subnets or network classes to protect, prioritise or isolate IT infrastructure and management from users. For example, many switches have a management VLAN to prevent end-user clients from re-configuring or accessing the switch’s own management portal.
IPv6 is still relatively new as an internal link format; however, some organisations are starting to implement IPv6 in their infrastructure in preparation for the switchover. Personally, I believe this will not be a requirement for some time.
Content filtering and Proxying is used in organisations to protect valuable data and users from compromise and exfiltration. Some organisations require a proxy to reach external services and most implement some form of content filtering, generally for productivity or traffic management purposes.
DNS (Domain Name System) servers can give internal network resources resolvable, recognisable addresses for internal services. Most enterprises use DNS with Active Directory through Windows Server domain controllers, so that their Windows clients can take advantage of resolvable network names for Windows machines.
Features of a Large Corporate Network
Larger corporate networks, ones that encompass tens of thousands of devices or more, may require additional setup, such as:
Load Balancing can be used to balance demand to external or internal services like internal enterprise applications or highly available applications that are business critical.
BGP Routing, or Border Gateway Protocol, is usually only required for extremely large networks where routing and network policies are likely to change. BGP is generally only needed by carrier ISPs or enterprises dealing with internet infrastructure. For most organisations, network requirements fall well short of what BGP can facilitate, and due to the premium on capable network devices, BGP is not supported on smaller SOHO (Small Office/Home Office) equipment.
Corporate Network Internal Services
DNS or Domain Name Systems
You may wonder how companies and other organisations are able to use top-level domain names that are not available on the public internet, as well as subdomains of a real domain that do not exist as external subdomains.
This is possible through many technologies and can incorporate many aspects to enable additional features like trusted SSL and network-level authentication or windows authentication to provide a relatively normal experience for end-users while being completely inaccessible from external networks.
SSL or Enterprise Trust
Even consumer routers often provide the facility to reserve DHCP addresses and register DNS names and aliases, but providing trusted SSL is accomplished through one of the following:
A local, trusted SSL certificate signing authority, with the organisation’s root or supplementary SSL certificate trusted by clients.
A real, actual trusted wildcard SSL certificate for a subdomain of the organisation. This is less common as it would require the same certificate to be on every application.
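The first option can be sketched with openssl: create a throwaway internal root CA, then sign a certificate for an internal-only name. All names here are made up; a real deployment would distribute ca.crt to clients’ trust stores via group policy or similar:

```shell
# 1. Create the internal root CA (this is the cert clients must trust)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=Example Corp Root CA"

# 2. Create a key and signing request for an internal-only hostname
openssl req -newkey rsa:2048 -nodes \
  -keyout app.key -out app.csr -subj "/CN=intranet.corp.example"

# 3. Sign the request with the internal CA
openssl x509 -req -days 825 -in app.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out app.crt

# 4. Any client that trusts ca.crt now trusts app.crt
openssl verify -CAfile ca.crt app.crt
```

In practice organisations use proper CA tooling (Active Directory Certificate Services, step-ca, etc.) rather than raw openssl, but the trust relationship is the same.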
Network Segmentation and Isolation
A Corporate Network may utilise Network Segmentation to isolate external clients from internal applications or require a VPN to access. In this case, rules on the router allow inter-VLAN communication and routing table rules to allow communication with clients. Some networks may implement a zero-trust architecture in their network access.
Network segmentation restricts access to different services based on rules, helping protect an enterprise from enumeration and the exfiltration of data, as access to the network is only possible through opaque rules that make data transfer over the allowed mediums difficult. For example, access to a public server on a trusted LAN over SSH (port 22) may not allow access to internal web interfaces such as ports 80 or 443, as network rules prevent access, usually by dropping packets.
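As an illustration, rules like these often come down to a few drops in a firewall’s forward chain. An nftables sketch, where the interface names and addresses are assumptions for the example:

```
# /etc/nftables.conf (sketch; interface names and addresses are assumptions)
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy accept;
        # keep the untrusted segment away from the trusted LAN
        iifname "vlan70" oifname "vlan20" drop
        # only the SSL proxy (10.20.0.5) may reach the app server's web ports
        ip daddr 10.20.0.10 tcp dport { 80, 443 } ip saddr != 10.20.0.5 drop
    }
}
```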
Many organisations may utilise these technologies in conjunction with an SSL proxy to provide legacy applications with an HTTPS frontend to a web server that is not configured for SSL, as access to the application web server would be restricted to only allow traffic through the proxy.
VPNs and DirectAccess
DirectAccess (similar to an always-on VPN) for Windows, or VPN services like L2TP/IPSec, enable corporate networks to be spanned over different environments, such as:
Field Engineers who rely on access to internal databases for parts or documents.
Mobile Devices and Tablets for reading email remotely.
Work from Home Deployments (WFH) for office employees who need access to shared drives and groupware.
Satellite or Remote Offices can deploy over the VPN to ensure a consistent experience for employees who travel.
Otherwise insecure environments, like coffee shops can be used as internal services will be accessed over the VPN and not directly over the internet.
Customer Premises where interfaces required on site can be relayed to internal networks at the origin organisation.
VPNs once configured with credentials can be utilised to provide network access as though they were direct clients of the VPN router, which could be placed in a trusted part of the enterprise and provide the typical trust, filtering and proxying required by the organisation configuration.
VPNs can often disconnect at work because packets are not making it to the VPN provider. The simplest way to rectify this is usually to use a wired Ethernet connection.
Corporate Network IP Schemes
Unlike a public IP address with a single home network-attached, a corporate network may take advantage of using many IP addresses, networks and physical links to their ISP to provide a more robust and uniform experience to users.
Almost all corporate networks will use VLANs and network subnets to distribute their client environments to isolate services for example, a computer lab in a school vs a teacher network, or an open WiFi network at a restaurant compared to a private one for POS (Point of Sale) terminals.
Generally, most enterprises use the 10.0.0.0/8 CIDR block, using different subnets for different kinds of devices. The traditional 192.168.0.0/16 range (256 contiguous class C networks, 65,536 possible addresses) may not provide enough IP addresses for some larger deployments.
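The capacity arithmetic is simply 2^(32 − prefix length), which is quick to check in a shell:

```shell
# Addresses per CIDR prefix: 2^(32 - prefix), computed via a left shift
for prefix in 8 16 24; do
  echo "/$prefix -> $((1 << (32 - prefix))) addresses"
done
# /8 -> 16777216, /16 -> 65536, /24 -> 256
```

(Usable host counts are slightly lower per subnet, since the network and broadcast addresses are reserved.)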
Corporate Network WiFi
Corporate networks used to be closed ecosystems, where only trusted devices were present and non-enterprise-owned equipment was not; this is no longer the case.
Rather than using combined routing and access-point devices like a home router, enterprises deploy commercial WiFi access points that can serve numerous clients and can be distributed throughout the locations the organisation occupies, such as buildings and restaurants. Using dedicated hardware like access points enables specialist configurations, like access-point roaming for clients and PoE for easier installation and unification.
Some newer WiFi networks can also provide certificates that can be used in the organisation to access internal resources over SSL.
Applications that are depended on by thousands of users may see peaks or dips in demand during the day and managing the cost of running the infrastructure can be challenging. Scalable applications are applications that are able to increase their resources to serve requests as they come.
What types of Scaling are there?
There are two basic examples of scaling for applications,
Vertical Scaling, where an application’s resources are increased due to demand, such as increasing the RAM available to an application host. Vertical Scaling is also sometimes referred to as scale-up.
Horizontal Scaling, where an application is spread across different nodes in a cluster. This is most appropriate for applications that require a lot of resources. Horizontal Scaling is also referred to as scale-out.
Scalable applications have many uses, including;
Allowing cost reduction during low utilisation by scaling down clusters or pods serving the application.
Improving quality of service during peak load times by scaling up vertically or out horizontally, using resources made available to the application autoscaler. This will ensure that your application always has the right amount of capacity to handle the current traffic demand.
Faster processing as features can use the optimal storage technology for data and information.
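To make the autoscaling case concrete, in Kubernetes terms it maps to a HorizontalPodAutoscaler. A sketch only; the deployment name, replica counts and CPU threshold are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app                  # illustrative deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2                 # scale in during quiet periods to cut cost
  maxReplicas: 10                # scale out under peak load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The controller adds pods when average CPU across the deployment exceeds the target and removes them as demand falls, which is exactly the cost-versus-capacity trade-off described above.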
Best Practice for Scalable Applications
Applications are usually best compartmentalised into components for both design and maintainability. Monolithic architectures for code bases and applications have caused application elements to become obscure and entrenched; using a more distributed approach from the start can reduce cost and technical debt, as components can be re-written or swapped out.
Splitting transient application functions into their own libraries or APIs can allow the front end and back end to operate independently; processes that take time can then be acted on based on events, rather than causing waiting or being processed as a batch.
Data storage should be independent of the service that owns the data, and should therefore use the best storage type and mechanisms available, such as caching or data streams. Slow returns should be accounted for and kept independent of the user interface.
Applications should not be constrained by resources; whenever possible, applications should perform the same for every user or function regardless of workload or work type. Rather than waiting for functions to complete, aim for eventual data consistency.
When implementing concepts like microservices, you should ensure standard practices, like a common set of languages and behaviours, to improve maintainability.
Complexity can sometimes be harder to manage than an equivalent monolithic application, even though each function should be simpler.
I recovered my data from AWS S3 and all I got was this lousy bill.
Aidan – Alternate Headline.
One of my hard drives failed, I thought I’d try to recover the valuable 400GB using ddrescue, it sort of worked.
Restoring from S3 is expensive: £27.53 for ~400GB
A week or so ago I realised that my hard drive was on the way out; it’s been powered on for almost 27,000 hours according to the SMART data. I first noticed when the PC loaded into check disk after every reboot. It took me about 3 reboots to decide something was up, so I used CrystalDiskInfo to check the disk, and sure enough it was reporting ‘Bad’. So I ordered 2 * 6TB drives and thought I’d better have a go at moving the data and making sure my backups were up to date.
For my backups, I use CloudBerry Backup (now called something else), an encrypting cloud backup solution compatible with Amazon S3. I use the cheapest storage option, S3 Glacier Deep Archive.
I booted into a persistent live Ubuntu 20 environment and installed ddrescue, ddrescueview and ddrescue-gui. The tools worked well but took way too long for the drive; you can see in the remaining-time section of ddrescue-gui that it would have taken an estimated 60 days to recover the data at the fastest setting.
Making DDRESCUE Faster
To make ddrescue faster, I found it was best to watch the drive speed in ddrescue-gui, then ditch the GUI in favour of the command line for a faster experience.
In the end I used these commands; make sure to replace the drives with your setup and set the minimum read rate to one your drive is comfortable with. I stopped the first command at around 90 percent of the way through the drive and swapped it for the second one.
# First run to cover myself in case the drive died more seriously.
sudo ddrescue -f --reopen-on-error --min-read-rate=8000 /dev/sdd2 /dev/sdc1 /home/ubuntu/Documents/log1.log
# Lots of Passes to try to recover slow sections.
sudo ddrescue -f --reopen-on-error --retry-passes=5 /dev/sdd2 /dev/sdc1 /home/ubuntu/Documents/log1.log
Your mileage may vary here, though, depending on the type of failure your drive has.
If you do end up using ddrescue-gui, at least to begin with, you can use its log file to get a command to start off with. Make sure to read the manual pages for ddrescue to determine the best command for you.
Here is an example of one of my outputs (.log files):
You can of course view this data using ddrescueview.
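For a rough sense of what those log files (mapfiles) contain: each data line records a block's position, size (both in hex) and a status character, where `+` means finished and `-` means bad sectors. Here is a minimal sketch that tallies the sizes by status to estimate the rescued fraction; the sample mapfile is invented for illustration, a real one comes from ddrescue itself.

```python
# Minimal ddrescue mapfile parser (illustrative only).
# Sums block sizes by status character to estimate how much was rescued.
SAMPLE = """\
# Mapfile. Created by GNU ddrescue
# current_pos  current_status
0x00000000     +
#      pos        size  status
0x00000000  0x40000000  +
0x40000000  0x00100000  -
0x40100000  0x3FF00000  +
"""

def rescued_fraction(mapfile_text):
    sizes = {}
    lines = [l for l in mapfile_text.splitlines()
             if l and not l.startswith("#")]
    # The first non-comment line is the status line; data lines follow.
    for line in lines[1:]:
        pos, size, status = line.split()
        sizes[status] = sizes.get(status, 0) + int(size, 16)
    return sizes.get("+", 0) / sum(sizes.values())

print(f"rescued: {rescued_fraction(SAMPLE):.2%}")
# → rescued: 99.95%
```

ddrescueview does the same bookkeeping visually, so this is only useful if you want the numbers in a script.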
After a week and a bit, I decided to stop the experiment and see what had been recovered. ddrescueview looked like this:
ddrescue was able to recover about 90.83% of the NTFS partition, enough to mount the drive and view the data. It contained many of my important personal files, most importantly photos and home video. The actual used space on the drive was only ~700GB, of which around ~450GB was valuable to me.
When I opened the personal photos and videos, the results were quite poor: some files were glitched, some contained no actual data, and some had stripes and lines through the image. Because the failure was spread across the partition's blocks, the recovered data was essentially a very poor copy with a lot of holes.
I decided it was best not to continue the recovery with ddrescue and instead restore from backup. The backup was taken exactly one month before the failure, to the day, so no real loss. However, only the data I truly cared about was backed up, so things like my VMware ISO files and downloads folder were lost and unrecoverable.
Downloading from AWS S3 Glacier Deep Archive
Using CloudBerry I made a restore plan and recovered the data using the slowest SLA, at 3-5 days. By sod's law it took the full amount of time to process and then some, because I put in the wrong decryption password and had to re-request the data.
Anyway, here is the bill: £27.53.
The killer was the data transfer fee out of London, at a cost of $0.09/GB ($28.37).
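The bill is easy to sanity-check: the egress charge is just gigabytes out times the per-GB rate. A quick sketch using the $0.09/GB figure from the bill and the ~315GB compressed backup size mentioned below; AWS pricing changes, so check the current rate card before relying on this.

```python
# Back-of-envelope S3 egress cost for the restore.
# Rate is the $0.09/GB from the bill; 315 GB is the compressed backup size.
TRANSFER_RATE_USD_PER_GB = 0.09
backup_gb = 315

transfer_cost = backup_gb * TRANSFER_RATE_USD_PER_GB
print(f"egress: ${transfer_cost:.2f}")
# → egress: $28.35
```

That lands within a few cents of the $28.37 on the actual bill (the real transfer was slightly over 315GB), and it makes clear why a multi-terabyte restore would sting.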
And with that, all of my data was recovered, this time without corruption.
Learnings and Final Thoughts
Although AWS S3 is a valid backup option, it's expensive to recover from. I already pay roughly $1/month for ~400GB (315GB compressed). A larger recovery would be prohibitively expensive; multi-terabyte or whole-disk backups would require compression.
Physical damage to a hard drive is essentially game over: assume your data is lost, and for best results have redundancy. This is the only reason I am thankful for S3, as it was my only way to recover my data. A local backup would have been much cheaper and faster to restore from.
The two new 6TB drives now run in a Windows Storage Spaces two-way mirror pool.
It's been a good while now, and Flashcard Club is well on its way to being a functional product. There has been some progress on features, notably the addition of Google OAuth2 sign-in through Laravel Socialite.
Logging in with a Google account should greatly lower the barrier for new users and improve retention and user acquisition.
With that in mind, Flashcard Club has been online for about three months and has yet to attract a single user. I believe this is mostly because I haven't promoted the product, which I will feel more comfortable doing once the site is ready, though it is already somewhat usable. To that end I have improved the homepage, mostly bringing it up to scratch with a call to action, and moved the changelog to an FAQ page, a name I may change later as I do not love it.
Users can now log in and link their Google account to Flashcard Club.
The landing page has had a massive makeover and most of the content is different now.
The changelog has been moved to the FAQ page.
Test and Study Mode
There is now a chart to plot test performance per set.
Test mode now has additional functionality, such as a gold highlight on completion of the test.
Test Mode now has a summary.
Google sign-in (federated identity): Completed
Terms of service
FAQ page: Somewhat completed
Markdown User Guide
Flashcard User Guide
Front page needs work: Not complete, but looks a lot better.
Cramming mode that removes cards previously marked “Correct”.
Part of improving the site in the next round will be improvements to the mobile experience, as most users will likely be on mobile devices.
I have also been ignoring the fact that there is currently no export option available to users.