The Physical Hard Drive

The Physical Hard Drive is a patent-pending system for storing data. Technology has advanced to the point where this is a functional method both for data storage and for augmenting existing storage. I developed it last year and appear to be well ahead of the competition. It is based on the simple idea of using lines, dots, shapes, irregular heights, holes, and variable-sized drives to push potential capacity past a terabyte per inch². See the pending patent in PDF at

An inch² contains 645,160,000,000,000 nm². I propose an initial standard of 50×50 nm “spots”, which avoids the need for complicated lithography until it can be mastered; spots are not the only method, however, just a unit of measurement for now.

Several ways to encode data exist today, from magnetic and optical drives to the old punch cards. My method uses physical lines, dots, and even shapes to achieve a higher potential data density on a given amount of material, and can also employ colors, textures, elevation differences, empty spaces, holes, and different sizes (and subsections) of drives to increase data density further.

While some of these techniques, color for instance, may not currently be cost-effective or may offer limited density, advances in technology will make them cheaper and denser. Light, for instance, should eventually be able to produce near-full color at 1 µm, which would mean roughly 23 bits of color per µm².

Currently, full color appears to be possible only at 100 µm, though I suspect 10 µm will be reached within the next two years. A 10×10×100 µm model with only 1,000,000 colors exists today, already a considerable increase over what was available in the past. At the current technology level, color encoding of this kind is capable of about 15 MB per inch². At 1 µm this becomes roughly 1.5 GB per inch², and remarkably it could be printed on plastic or a composite in the future, meaning you could make a “deck of 54 two-sided cards”, almost the same size as current hard drives, that would hold 1 TB for pennies on the dollar.
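A back-of-envelope sketch of the deck-of-cards figure. The card dimensions (a standard 2.5×3.5 inch playing card), the 1 µm spot pitch, and 24-bit near-full color are my assumptions for illustration; the text does not fix them.

```python
# Rough capacity of "54 two-sided cards" under assumed parameters:
# 2.5 x 3.5 inch cards, 1 um color spot pitch, 24 bits per spot.
UM2_PER_INCH2 = 25_400 ** 2          # 1 inch = 25,400 um
CARD_AREA_IN2 = 2.5 * 3.5            # assumed playing-card face area
BITS_PER_SPOT = 24                   # assumed near-full color
SIDES = 54 * 2                       # 54 cards, both sides used

bits_per_inch2 = UM2_PER_INCH2 * BITS_PER_SPOT       # one spot per um^2
bytes_total = bits_per_inch2 * CARD_AREA_IN2 * SIDES / 8
print(f"{bytes_total / 1e12:.2f} TB")                # ~1.83 TB
```

Under these assumptions the deck comes out near 1.8 TB, comfortably above the 1 TB claimed.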

Since an inch² has 645,160,000,000,000 nm² and I am proposing a 50×50 nm standard, this leaves 258,064,000,000 locations on one side of an inch² at maximum. At one bit per location, this corresponds to roughly 32 gigabytes per inch² (on one side).
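The spot count and single-bit capacity above can be checked directly:

```python
# Verify the spot count for a 50 x 50 nm standard on one square inch.
NM2_PER_INCH2 = 25_400_000 ** 2      # 1 inch = 25,400,000 nm
SPOT_AREA_NM2 = 50 * 50              # proposed 50 x 50 nm spot

locations = NM2_PER_INCH2 // SPOT_AREA_NM2
gigabytes = locations / 8 / 1e9      # one bit per location
print(locations)                      # 258064000000
print(f"{gigabytes:.1f} GB")          # 32.3 GB
```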

Through a new sub-school of math I call “Advanced Combinations”, it becomes possible to use 3D structures. Recent work by Dr. Onur Tokel has used infrared lasers to create structures, lines, tunnels, and dots INSIDE silicon wafers without damaging the wafer’s surface. In theory, using multiple interior levels could as much as triple normal data storage.

However, even without such a method, we can increase the yield in a variety of ways. Alignment of lines is one: each subsection carries its own alignment relative to other subsections. Additionally, interruptible rays from a central source (i.e., breakable lines), combined with multiple central sources placed anywhere, give us a significant means of encoding vast sums of data.

One advancement is the idea of the hole. Ten holes in a 100-bit section cost 10 bits of ordinary capacity, but the choice of their positions carries information: there are C(100,10) ≈ 1.7×10¹³ possible arrangements, about 43 bits, for a substantial net gain. Pascal’s Triangle provides the key to understanding this, as the locations of the holes are how the data is encoded. Structural integrity should be the only factor of concern. This data is permanent in nature: if you drilled holes in a magnetic drive (thus removing the magnetism at those specific locations), they are no longer read/writeable but are forever encoded. In practice, a 5-10% increase in data density should be achievable with this technique. In a terabyte of storage this would mean 50 to 100 gigabytes of data permanently encoded, which should suffice for a full operating system, all current drivers, and a number of common programs (browsers, map programs, music/movie players, etc.) as desired at time of manufacture.
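The combinatorics behind the hole technique, where Pascal’s Triangle (binomial coefficients) supplies the count of placements:

```python
# Information carried by hole placement: choosing where 10 holes go in a
# 100-bit section selects one of C(100, 10) possible patterns.
import math

arrangements = math.comb(100, 10)     # binomial coefficient, per Pascal's Triangle
bits_encoded = math.log2(arrangements)
print(arrangements)                   # 17310309456440
print(f"{bits_encoded:.1f} bits")     # ~44.0 bits
```

Each pattern of hole positions is one distinguishable message, so the position choice alone encodes about 43-44 bits while sacrificing only the 10 drilled locations.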

We can also significantly increase capacity just by using 4 levels of depth per side of a given wafer. Add the new infrared technique, which lets us place and read data inside a wafer, and in theory we could triple or quadruple total storage. Even with minimal depth differences, capacity doubles as-is.
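A sketch of the depth arithmetic, on my reading of the claim (not a spec): a spot that can sit at one of n distinguishable depths carries log₂(n) bits, so 4 depth levels double the one-bit-per-spot baseline.

```python
# Bits per spot as a function of distinguishable depth levels (assumed model).
import math

def bits_per_spot(depth_levels: int) -> float:
    # A spot resting at one of n distinguishable depths encodes log2(n) bits.
    return math.log2(depth_levels)

print(bits_per_spot(2))   # 1.0  (flat: present/absent)
print(bits_per_spot(4))   # 2.0  (4 depth levels: double the baseline)
```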

At this juncture, using only proven methods, I believe 132 gigabytes per inch² is possible, and as much as 800 gigabytes in theory using methods not yet fully modeled. Advances in technology point to ever-rising storage densities; I suspect the final level will exceed 2 TB per inch².

This technology will initially be useful (due to cleanroom requirements and read/write equipment costs) only to large organizations with long-term data backup requirements: government (the NSA especially, it would seem), colleges and universities, banks and financial institutions, and very large corporations.

The advantages are high data density, potentially low cost (silicon is far more common than the rare earth elements used in magnetic drives), and a safe format for storing data. How does this compare with existing hardware? At $7.60 for the roughly 7 square inches of a 3-inch-diameter wafer (source: University Wafer), the media cost works out to about $9 a terabyte at its highest, down to as low as $1 a terabyte… excluding the read/write hardware, cleanroom, wafer storage, and other incidental costs.
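The per-terabyte figure can be reproduced from the numbers already quoted, pairing the $7.60 wafer price with the conservative 132 GB per inch² density from above:

```python
# Media cost per terabyte: a $7.60, 3-inch-diameter wafer (~7 in^2)
# at the conservative 132 GB/in^2 density claimed in the text.
import math

wafer_price = 7.60                       # USD, per University Wafer
wafer_area = math.pi * (3 / 2) ** 2      # ~7.07 in^2
gb_per_inch2 = 132                       # conservative density estimate

tb_per_wafer = wafer_area * gb_per_inch2 / 1000
print(f"${wafer_price / tb_per_wafer:.2f} per TB")   # ~$8.15 per TB
```

That lands near the "$9 at its highest" end of the quoted range; the $1 figure presumes the higher theoretical densities.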

For potential investors or buyers:
This is an applicable technology. The error rate will be low using the 50 nm standard. At nearly 1 TB per 3-inch-diameter wafer, a great deal of data can be stored in carry boxes, and the cost cannot be beat. Yes, there are other costs involved, but the combined total should still stay under $20 a terabyte. These wafers will also be EMP-proof, will have exceedingly long shelf lives, and require no electricity when properly stored, and advances in math and manufacturing will keep this the most cost-effective data storage system in the world.

As an added bonus, my technology can increase the capacity of current data storage methods by as much as 10%.

Much of the potential is tied up in math models. I will also accept a 3-year consult-and-assist contract to help bring this technology to full capability. A team of a couple of software developers and a mathematician should be able to develop the models without undue difficulty.

