Deduplicating Large Data With Rails

04/12/2017

Storing and retrieving large chunks of data from your database can be tricky if it’s not done correctly. What happens when you want to store a relatively large document or body of text in your database, but that large chunk of text is likely to be identical for thousands of new records? You’re faced with the possibility of storing gigabytes of duplicated data that really doesn’t need to be there.

At Vue Pilot, we had this problem a lot: thousands of duplicate records with unique IDs but identical content, just occupying disk space. It was pretty clear this wasn’t an optimal solution.

Here’s the scenario: say you have a piece of software that generates a log file on installation with some key information you would like to track and archive. You push this log file back to your server via an API endpoint and store it in the database. Each log file is roughly 100KB, so for every 100,000 installs that’s coming close to 10GB of storage.
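
As a rough back-of-the-envelope check (assuming 100KB means 100 × 1024 bytes), the numbers work out like this:

bytes_per_log = 100 * 1024
installs      = 100_000
(bytes_per_log * installs) / (1024.0 ** 3)  # => roughly 9.5 GB, before any deduplication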

The problem is that 80% of these log files are identical: they contain the same information, but we treat and store them as though they were unique, wasting disk space and blowing out our table size and tuple count with an unnecessary number of rows.

We all know disk space is cheap, but processing power isn’t. Large tables mean large indexes, which in turn mean slower queries, more CPU cycles, and more care when writing your future SQL queries. If you query on a non-indexed field you can trigger a full table scan, and your app takes a big performance hit in the process.
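
If you want to see this for yourself (a rough sketch, assuming PostgreSQL and the Log model we build below), ActiveRecord’s explain shows what the planner does with and without an index to lean on:

# A lookup on the indexed checksum column can use an index scan,
# while a query on the non-indexed data column falls back to reading
# every row (a "Seq Scan" in PostgreSQL's plan output).
Log.where(checksum: "d41d8cd98f00b204e9800998ecf8427e").explain
Log.where("data LIKE ?", "%error%").explain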

People love to say “storage is cheap! store everything!”, but moving, loading and managing 100GB backup files is a lot more difficult and time-consuming than doing the same with 20GB backup files, so it’s worth optimizing where possible.

So how do we optimize this? We’re going to create three tables: the users table, one for the log files, and one that acts as a join table between users and logs. Instead of having a direct relation between the Log and the User, there will be a relation from User to UserLog and from UserLog to Log.

Instead of just saving the log file when we receive it, we will hash the contents and perform a lookup on this hash in the database to see if we have already seen this exact block of data before. If we get a match on the hash, we reuse the id of that record and throw away the incoming data. This means that if 1,000 people generate the exact same log output, we still only store it once.
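
The key property we rely on here is that identical content always produces an identical digest, so the checksum can act as a lookup key. A minimal sketch using Ruby’s standard Digest library:

require 'digest'

a = Digest::MD5.hexdigest("identical log contents")
b = Digest::MD5.hexdigest("identical log contents")
a == b  # => true, the same content always maps to the same 32-character key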

Let’s have a look at how to set this up.

class CreateLog < ActiveRecord::Migration
  def change
    create_table :logs do |t|
      t.string :checksum
      t.text :data
    end

    create_table :user_logs do |t|
      t.integer :log_id
      t.integer :user_id
    end

    create_table :users do |t|
      t.string :name
    end

    # The unique index lets the database reject a duplicate checksum,
    # which Log.store below relies on when two identical logs arrive at once.
    add_index :logs, :checksum, unique: true
    add_index :user_logs, :user_id
    add_index :user_logs, :log_id
  end
end

OK, so we have the base tables we need to deduplicate our data. Now let’s see how we would store the log files. (For the sake of a cleaner read, I’m going to omit all the usual boilerplate code you would find in a normal application, like validation and authentication, and get to the point.)

Assuming that we are receiving the logs from an API endpoint somewhere and that we just want to store them for later inspection and return a status code, here is a simple way we could implement it.

class LogController < ApplicationController
  def create
    log = Log.store(params[:log_data])
    UserLog.create(log_id: log.id, user_id: current_user.id)
    head :created
  end
end

class User < ActiveRecord::Base
  has_many :user_logs
  has_many :logs, through: :user_logs
end

class UserLog < ActiveRecord::Base
  belongs_to :user
  belongs_to :log
end

class Log < ActiveRecord::Base
  has_many :user_logs

  def self.store(data)
    key = Digest::MD5.hexdigest(data)
    log = Log.find_by(checksum: key)
    return log if log

    log = Log.new(data: data, checksum: key)
    Log.transaction(requires_new: true) do
      begin
        log.save!
      rescue ActiveRecord::RecordNotUnique
        raise ActiveRecord::Rollback
      end
    end
    # If another request inserted the same checksum first, our save was
    # rolled back, so fall back to the row that won the race.
    log.persisted? ? log : Log.find_by(checksum: key)
  end
end

It’s as simple as that. The Log.store method uses Digest::MD5.hexdigest to calculate an MD5 checksum of the log data, then looks up whether a record with that checksum already exists. If it does not, we create a new log record and save it, wrapping the save in a transaction and rescuing ActiveRecord::RecordNotUnique just in case another request is storing the same content at the same time; if that happens, we return the row that won the race instead.

The User model has a through: :user_logs relation to the Log model, which means that when you call user.logs it looks up the relevant logs via the smaller join table, which consists of nothing but ids. This essentially resolves to a query that looks something like:

SELECT * FROM logs 
INNER JOIN user_logs ON logs.id = user_logs.log_id 
WHERE user_logs.user_id = <user_id>
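
In practice (a hypothetical example, assuming two users, alice and bob, who both uploaded identical log content), the join rows for each user end up pointing at the same Log row:

alice.logs.pluck(:id)  # => [4]
bob.logs.pluck(:id)    # => [4], both users reference the single stored copy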

As you can see from the following console output, trying to store the same data repeatedly results in the same record being returned rather than stored again.

2.3.1 :004 > log = "some log file text here"
2.3.1 :005 > Log.store(log).id
 => 4 
2.3.1 :007 > Log.store(log).id
 => 4 
2.3.1 :008 > Log.store(log).id
 => 4 
2.3.1 :009 > Log.store(log).id
 => 4 
2.3.1 :010 > new_log = "something else"
2.3.1 :011 > Log.store(new_log).id
 => 5