Data recovery services: What are they and when will you make use of them?

Have you ever had a PC that just would not start up? Or a hard drive that is just not read by any machine?

Did an external hard drive stop responding, become unsearchable and/or start making funny clicking sounds?

Have you ever formatted a drive (whether a hard drive or a flash drive (memory stick)), only to realise you needed the data on that drive and didn’t make a backup before clicking the OK button?

These are some of the scenarios wherein you could conceivably make use of data recovery services.

But what can data recovery services do for you, you may ask.

First off, please always bear in mind that while data recovery firms will always do everything in their power to recover your precious data, recovery is never 100% guaranteed, as there are circumstances that may make this impossible, e.g.:

> drive struck by lightning or subject to continuous power surges;
> subjected to very strong magnetic fields;
> machine dropped or knocked over while the drives are working hard, where the platters are so badly damaged (surface scraped, platters bent or broken, spindle bent or snapped, read / write head penetrating the platters) that recovery effectively becomes impossible.

Data recovery is the process whereby data is recovered from a damaged or otherwise inaccessible drive.

There are two methods employed, viz.

> soft recovery, where recovery is possible by means of computer software;
> when soft recovery fails, one can perform hard recovery: the hard drive is dismantled (in a Class 100 or cleaner Clean Room) and, depending on the damage encountered, components within the hard drive are replaced where possible or, as a last resort, the platters are removed and placed in a specialised piece of equipment (a disk reader), which is then used to read the data directly off them. This is, however, a labour intensive operation and, as such, the more expensive option, with a limited chance of success.
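To make "soft recovery" more concrete: recovery software typically scans the raw bytes of a drive or drive image for known file signatures and carves out whatever it finds. The sketch below is a deliberately simplified illustration of that idea (not a real recovery tool; the data is fabricated), carving JPEGs out of a byte buffer:

```python
# Minimal file-carving sketch: scan raw bytes for JPEG signatures.
# Real recovery tools handle fragmentation, damaged sectors and
# many more formats; this only illustrates the principle.
JPEG_SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"       # JPEG end-of-image marker

def carve_jpegs(raw: bytes) -> list[bytes]:
    """Return every byte run that looks like a complete JPEG."""
    found, pos = [], 0
    while (start := raw.find(JPEG_SOI, pos)) != -1:
        end = raw.find(JPEG_EOI, start)
        if end == -1:
            break
        found.append(raw[start:end + 2])   # include the end marker
        pos = end + 2
    return found

# Example on fabricated data: two "JPEGs" buried among junk bytes.
image = (b"junk" + JPEG_SOI + b"pic1" + JPEG_EOI +
         b"more" + JPEG_SOI + b"pic2" + JPEG_EOI)
print(len(carve_jpegs(image)))  # -> 2
```

The same scanning approach is why files deleted or lost to a quick format are often still recoverable: the bytes usually remain on disk until overwritten.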

But what are the biggest causes of failure in hard drives?

> Severe viral infections, which may damage the boot sector, effectively rendering the drive inaccessible;
> Not performing regular cleanup and maintenance of hard drives (including, but not limited to, defragmentation, which can be scheduled on most operating systems, and disk cleanup, i.e. removing temporary or unused / unusable files to save space), or filling a drive beyond the recommended maximum of 75-80% of capacity (this free space is what the Operating System uses for maintenance of the drive);
> Abuse of the drive, e.g. using a desktop / laptop drive as a server (with extremely high read and write activity, pushing the drive beyond its actual designed capabilities);
> Moving the machine while the drives are spinning (see below for an analogy of the workings of a hard drive and possible causes of damage)

Now, the question can be asked about how to avoid said issues.
To be honest, drive failure is inevitable; it happens to everyone at one time or another.
Yet there is a way to minimise the impact of the failure:

> Ensure you have an Uninterruptible Power Supply (with Surge Protection) connected between the power socket and your PC;
> Keep backups of your precious data on reliable storage (e.g. an External Hard Drive), not the drive on which the data is currently stored, and remember to regularly check the quality of the backed-up data. There are several modestly priced yet reliable backup tools available on the market;
> Do not manoeuvre, move or shake your PC or Laptop while the hard drives are spinning (i.e. while it is on) – that includes walking around with the laptop on (if you need to do this, obtain a machine with Solid State Drives)
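On checking the quality of backed-up data: a simple approach is to compare checksums of the original and the backup copy. A minimal sketch in Python (the file names here are throw-away placeholders for illustration):

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 of a file, read in chunks so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_is_intact(original: str, backup: str) -> bool:
    """A backup is only useful if it matches the original byte for byte."""
    return file_digest(original) == file_digest(backup)

# Example with two throw-away files standing in for a real backup:
with open("original.txt", "wb") as f:
    f.write(b"precious data")
with open("backup.txt", "wb") as f:
    f.write(b"precious data")
print(backup_is_intact("original.txt", "backup.txt"))  # -> True
```

Running a check like this after each backup catches silent corruption before the day you actually need the backup.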

The promised analogy:

Consider the read / write heads of the hard drive as a Boeing 747 jumbo jet.

Consider the platters of the drive as the ground.

Now imagine the jumbo flying at Mach 5, three inches (approximately 75 mm) above the ground.

On this scale,

> a wavelength of white light is approx. 10 inches (~250 mm);

> a particle of smoke is a 1/2 metre diameter boulder;

> a fingerprint is a metre high wall;

> a dust particle is the size of a house;

Impact with any of these is sure to cause damage.

However, the most frequent cause of hardware damage (i.e. physical damage to the platters and read / write heads) is movement. While the platters are spinning at full speed (roughly between 5,400 rpm and 7,200 rpm, or even more (up to approx. 11,000 rpm) on newer hard drives, depending on the make of the drive), any movement will cause the platters to wobble. On the scale used above, that wobble can be anything between 4 inches (~100 mm) and 8 inches (~200 mm), i.e. the platter will hit the read / write head, leaving bad sectors. Enough of these impacts will render the drive (and the read / write head) totally useless, with all data effectively lost.

Posted in Data Recovery | Tagged | Comments Off on Data recovery services: What are they and when will you make use of them?

Our current projects

We have several projects in the pipeline, aimed at a wide audience.

These include:

> My Secret Secret, a simple encryption / decryption tool, making use of a complex encryption algorithm with a dynamic, user-determined key length.

> Biblios Personae, a library application that makes it possible to manage who you lend what to, how long it took to be returned, and the condition it was in when it was returned. Using this, you can decide whether you want to lend another item to that person (whether book, CD, DVD or any other lendable item).

> My Secret Secret Maxi, based on My Secret Secret, with one major difference: it can encrypt or decrypt any readable and writeable document in place (e.g. a Word document being sent to a specific person that you do not want to fall into the wrong hands (e.g. a competitor's)).

Posted in New developments | Comments Off on Our current projects


Keep watching this space. Our products will soon be available for purchase and download.

Posted in Company News and Events | Comments Off on Future…


Our web page is launched. Now we are ready to face the world.

Posted in Company News and Events | Comments Off on Launched!

Why Normalise Database Tables?

Many GUI and web developers I have spoken to do not fully understand the value to be gained from normalising their database designs. It is, after all, easier to build the tables to look exactly like the form or web page being built.

The question that usually arises when we as database “specialists” question them is: “But what is wrong with that design? My code works, doesn’t it?”

The obvious answer to this would be “yes, your code works”, but does it really?

Designing tables in this manner will eventually have serious drawbacks, including, but not limited to, redundancy, wasted space and loss of data integrity.

As an example, we will use an extract of one table from a failed company (the reason for its eventual failure will become apparent in due course).

Order Table Definition

Now the programmers in question defend their design, stating that it is normalised, as it does have a primary key, so the rows are uniquely identifiable.

OK, let us look at that statement.

Yes, there is a primary key. Now, does having a primary key make a table Normalised?

The answer is yes, provided it also complies with First Normal Form (1NF) and Second Normal Form (2NF). This table does not comply with 1NF, which states that:

> the table is a faithful representation of a relation
> it is free of repeating groups

(see here for a definition of 1NF)

So, how do we go about normalising this data set?

> First, let us eliminate the repeating columns by placing them in their own table, which will look as follows once complete:

Remove repeating columns

Now, looking at the resulting tables, we can see they comply with 1NF.
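Since the table diagrams above are images, here is a hypothetical sketch of the resulting structure (SQLite standing in for MSSQL purely for illustration, with column names such as OrderDate assumed), showing that the old repeating-group limit on parts per order is gone:

```python
import sqlite3

# Hypothetical reconstruction of the split described above; the
# post's diagrams are images, so these column names are assumptions.
con = sqlite3.connect(":memory:")
con.executescript("""
    -- After the split: no more repeating Part1..Part5 columns.
    CREATE TABLE Orders (
        OrderNo   INTEGER PRIMARY KEY,
        OrderDate TEXT
    );
    CREATE TABLE PartsPerOrder (
        PartPerOrderCode INTEGER PRIMARY KEY,              -- surrogate key
        OrderNo          INTEGER REFERENCES Orders(OrderNo),
        Part             TEXT
    );
""")
# One order may now carry any number of parts, not just five.
con.execute("INSERT INTO Orders VALUES (1, '2011-06-01')")
con.executemany(
    "INSERT INTO PartsPerOrder (OrderNo, Part) VALUES (1, ?)",
    [(f"part-{i}",) for i in range(100)],
)
print(con.execute(
    "SELECT COUNT(*) FROM PartsPerOrder WHERE OrderNo = 1").fetchone()[0])  # -> 100
```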

Now, let us check that they comply with 2NF, which requires that we:

> Identify a candidate key;
> Ensure every non-key attribute depends on the whole candidate key, i.e. with no partial dependencies.

For Orders, a good candidate key is OrderNo (which we will make the Primary Key, as it must of necessity remain unique). For PartsPerOrder, OrderNo and Part together make a good candidate key; we will create a surrogate key (viz. PartPerOrderCode) as the primary key, with a non-clustered index covering the candidate key columns (along with the respective foreign keys).

Thus the structure now complies with 2NF.

(see here for a definition of 2NF)
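The key arrangement just described might be sketched as follows (again SQLite standing in for MSSQL for illustration; the unique index below plays the role of the covering non-clustered index):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE PartsPerOrder (
        PartPerOrderCode INTEGER PRIMARY KEY,   -- surrogate key
        OrderNo          INTEGER NOT NULL,
        Part             TEXT    NOT NULL
    );
    -- The candidate key (OrderNo, Part) is enforced with a unique
    -- index; in MSSQL this would be a covering non-clustered index.
    CREATE UNIQUE INDEX IX_PartsPerOrder_Candidate
        ON PartsPerOrder (OrderNo, Part);
""")
con.execute("INSERT INTO PartsPerOrder (OrderNo, Part) VALUES (1, 'widget')")
try:
    # A second row with the same (OrderNo, Part) violates the candidate key.
    con.execute("INSERT INTO PartsPerOrder (OrderNo, Part) VALUES (1, 'widget')")
except sqlite3.IntegrityError:
    print("duplicate (OrderNo, Part) rejected")
```

The surrogate key keeps joins and foreign keys cheap, while the unique index preserves the real-world uniqueness rule.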

Now we need to go further. Are there any fields in the tables that do not belong (i.e. do not describe the record in the table)?

The PartPerOrder table still has a field for the Sales Person’s mobile phone. If the salesperson changes their mobile number, it will need to be updated in each and every order that person was involved in. That could (and quite possibly will) lead to data anomalies.
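To see the anomaly concretely (hypothetical data, SQLite for illustration): when the mobile number is repeated on every order row and an update misses some rows, the database ends up holding two conflicting "truths":

```python
import sqlite3

# Demonstrates the update anomaly: the sales person's mobile number
# is duplicated on every order row (hypothetical schema and data).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE PartsPerOrder (
    OrderNo INTEGER, Part TEXT, SalesPerson TEXT, SalesMobile TEXT)""")
con.executemany("INSERT INTO PartsPerOrder VALUES (?, ?, ?, ?)", [
    (1, "widget",   "Alice", "555-0100"),
    (2, "sprocket", "Alice", "555-0100"),
    (3, "gear",     "Alice", "555-0100"),
])

# Alice changes her number, but only order 1 gets updated...
con.execute("UPDATE PartsPerOrder SET SalesMobile = '555-0199' WHERE OrderNo = 1")

# ...and now two different numbers exist for the same person.
distinct = con.execute(
    "SELECT COUNT(DISTINCT SalesMobile) FROM PartsPerOrder "
    "WHERE SalesPerson = 'Alice'").fetchone()[0]
print(distinct)  # -> 2
```

Storing the number once, against the person, makes this inconsistency impossible by construction.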

How do we now eliminate this data?
By applying the rules for Third Normal Form (3NF).

We can ask if this data may already exist in another table. Shouldn’t the salesperson contact information therefore be stored in the Staff table?

Checking the Staff table, we do indeed find a column labelled “MobilePhone”, which should, presuming it has been completed correctly, contain the person’s correct mobile number.

We therefore remove this column and use the one in the Staff table instead.

Now there are no redundant data columns in the tables, which gives a much cleaner design (see here for a definition of 3NF).

Is it possible to improve the data?
Yes, we can proceed to the next level of Normalisation.
This next level of normalisation is termed Boyce-Codd Normal Form (BCNF) (also called 3.5 Normal Form (3.5NF) by some theorists).

The aim here is to ask if a column really describes the table and, if not, move it to another table (and, in so doing, probably eliminate existing nullable columns).

In the Staff table, there are columns that do not explicitly describe the staff member as a person, viz. their contact details (Extention and MobilePhone).

We therefore create a new table to carry the staff member’s contact information, and remove them from the Staff table. This could be called StaffContactDetails and will appear as follows:

Boyce-Codd Normal Form

Now there are really no redundant data columns visible, and the number of nullable columns is greatly reduced (see here for a definition of BCNF).

The nullable columns that are left add value where they are, as they do describe the tables and will, eventually, be filled in (e.g. when an order is finalised and paid).

We could, however, normalise this design further, but we need to ask ourselves if this will add value in terms of both storage space and performance (for details of the other levels of normalisation, refer to Fourth Normal Form (4NF), Fifth Normal Form (5NF) and Sixth Normal Form (6NF)).

The programmer would now scream and shout, stating that his code will not work, that he cannot represent this “shattered” data in his front-end, that he will have to start all over again, etc., etc., ad nauseam.

Whereupon we, as the database “guys” will simply answer: “Haven’t you ever heard of a view or stored procedure?”.
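To illustrate that answer: a view can hand the front-end the flat row shape it expects while the storage underneath stays normalised. A sketch (SQLite for illustration, with assumed table and column names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Orders (OrderNo INTEGER PRIMARY KEY, SalesPersonId INTEGER);
    CREATE TABLE PartsPerOrder (
        PartPerOrderCode INTEGER PRIMARY KEY,
        OrderNo          INTEGER REFERENCES Orders(OrderNo),
        Part             TEXT);
    CREATE TABLE Staff (StaffId INTEGER PRIMARY KEY, Name TEXT);

    -- The view re-assembles the flat row the front end wants.
    CREATE VIEW OrderLines AS
        SELECT o.OrderNo, p.Part, s.Name AS SalesPerson
        FROM Orders o
        JOIN PartsPerOrder p ON p.OrderNo = o.OrderNo
        JOIN Staff s ON s.StaffId = o.SalesPersonId;
""")
con.execute("INSERT INTO Staff VALUES (7, 'Alice')")
con.execute("INSERT INTO Orders VALUES (1, 7)")
con.execute("INSERT INTO PartsPerOrder (OrderNo, Part) VALUES (1, 'widget')")
print(con.execute("SELECT * FROM OrderLines").fetchone())  # -> (1, 'widget', 'Alice')
```

The front-end code reads OrderLines exactly as it read the old flat table, so "starting all over again" is not required.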

In any event, using our example, why did the company fail?
> What happens when a client has a large order, requesting numerous different parts (substantially more than 5)?
– One order is generated and paid for per 5 parts required. If the client needed 100 different parts, 20 orders would be completed, each having to be settled independently.
– This is expensive and time-consuming, not to mention frustrating.

– Most people would rather go to a competitor with a simpler, more streamlined process (backed by a well designed database), who can process all the parts in one order, with one payment.

When the developer was asked why it was done this way, he replied that management had assured him no-one would order more than 5 parts at a time – this points back to:
> not understanding the business nor the environment in which the client operates, and
> poor requirement gathering skills.

Requirement elicitation will be covered in a future post to this blog…


Posted in Database | Tagged , | Comments Off on Why Normalise Database Tables?

File system storage

I am constantly amazed at how many people believe the only way to change a hard drive's file system from FAT or FAT32 to NTFS is by formatting the drive.

The following command, run from a command prompt (on any drive other than the one being converted), does the job without formatting. Note that the conversion is one-way: going back from NTFS to FAT or FAT32 does require a reformat, so take a backup first.

CONVERT <volume> /FS:NTFS [/V] [/CvtArea:filename] [/NoSecurity] [/X]

volume            Specifies the drive letter (followed by a colon), mount point, or volume name.
/FS:NTFS          Specifies that the volume will be converted to NTFS.
/V                Specifies that Convert will be run in verbose mode.
/CvtArea:filename Specifies a contiguous file in the root directory that will be the placeholder for NTFS system files.
/NoSecurity       Specifies that the security settings on the converted files and directories allow access by all users.
/X                Forces the volume to dismount first if necessary. All open handles to the volume will then no longer be valid.

For example, to convert drive D: with verbose output, run: CONVERT D: /FS:NTFS /V

Posted in Operating System | Tagged , , , | Comments Off on File system storage

Table Partitioning

I recently presented a discussion at the SA SQL User Group in Bryanston, on the topic of Logically Partitioning Database Tables in MSSQL 2005 & 2008.

presentation: TablePartitioning

Posted in Database | Tagged , | 1 Comment