Determining consulting rates

It can sometimes be hard, particularly with startups, to determine what to charge. I have tried various models over the years, from charging nothing, to greatly reduced rates, to full price. None has worked well.

As one of the top MySQL consultants, I kept my individual rates down in order to compete with the three or four other companies worldwide that provide comparable services. In the end, this hurt my bottom line.

I now charge a premium rate that matches my skills, my expertise, and what my competitors charge. I charge that rate for all customers: large, small, old, and new. When the value of my work in performance tuning, disaster management, scalability, and architecture is weighed against the cost of lost potential or future business, it is not difficult to justify a reasonable rate. I also continue to speak extensively, and to write and publish materials that provide detailed practical knowledge for organizations and individuals that can invest the time, but not the money.

I am still shocked when large, established companies want a discount; just last week, a company wanted 33% off for a few hours of work.

An extract from “3 Things Entrepreneurs Should Never Depend On When Starting A Company” provides great reinforcement of what appropriate pricing is.

Fearful Pricing

When I started my business, I undercharged for my service. I didn’t have the confidence to ask for a decent price, and I thought I had to have the lowest price in order to get business.

What did these practices get me? Low profits and poor cash flow.

In order to survive as a startup—both financially and mentally—it’s crucial that you make sure you’re receiving maximum reward for your maxed out efforts. If you don’t see the true value in your business, how do you expect your clients to do so? Your work is worth it; adjust your prices accordingly.

Read more: Business Insider
