Adding a Drizzle Plugin

I joined about 50 others, including a number of core MySQL developers and MySQL community members, today for the 2009 Drizzle Developers Day at Sun Microsystems' Santa Clara campus.

In addition to a number of presentations and various group discussions, most of my individual hacking time was spent under the guidance of Drizzle team developer Stewart Smith, where Patrick Galbraith and I started porting Patrick's memcached UDF functions for MySQL.

Leveraging some existing Drizzle plugins such as CRC32() and UNCOMPRESS(), we were easily able to put together the new src/plugin/memcached directory (plug.in, Makefile.am, and a drizzle_declare_plugin definition in get.cc) to get a working 'Hello World' stub:

plug.in

$ more plug.in
DRIZZLE_PLUGIN(memcached,[memcached UDF],
        [UDF Plugin for memcached])
DRIZZLE_PLUGIN_STATIC(memcached,   [libmemcachedudf.a])
DRIZZLE_PLUGIN_MANDATORY(memcached)  dnl Default
DRIZZLE_PLUGIN_DYNAMIC(memcached,   [libmemcachedudf.la])
Makefile.am
$ more Makefile.am
# Copyright (C) 2006 MySQL AB
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA

EXTRA_LTLIBRARIES =	libmemcachedudf.la
pkgplugin_LTLIBRARIES =	@plugin_memcached_shared_target@
libmemcachedudf_la_LDFLAGS =	-module -avoid-version -rpath $(pkgplugindir)
libmemcachedudf_la_LIBADD =		$(LIBZ)
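# NOTE: $(LIBZ) is carried over from the plugin this skeleton was based on;
# a real port would presumably also need to link against libmemcached here.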
libmemcachedudf_la_CPPFLAGS=	$(AM_CPPFLAGS) -DDRIZZLE_DYNAMIC_PLUGIN
libmemcachedudf_la_SOURCES =	get.cc


EXTRA_LIBRARIES =	libmemcachedudf.a
noinst_LIBRARIES =	@plugin_memcached_static_target@
libmemcachedudf_a_SOURCES=	$(libmemcachedudf_la_SOURCES)
get.cc
$ more get.cc
/* Copyright (C) 2009 Patrick Galbraith, Ronald Bradford
...
*/
#include <drizzled/server_includes.h>
#include <drizzled/sql_udf.h>
#include <drizzled/item/func.h>
#include <drizzled/function/str/strfunc.h>

#include <stdio.h>
#include <libmemcached/memcached.h>

using namespace std;

/* memc_get */
class Item_funcmemc_get : public Item_str_func
{
public:
  Item_funcmemc_get() : Item_str_func() {}
  const char *func_name() const { return "memc_get"; }
  bool check_argument_count(int n) { return (n==1); }
  String *val_str(String*);
  void fix_length_and_dec() {
    max_length=32;
    args[0]->collation.set(
      get_charset_by_csname(args[0]->collation.collation->csname,
                            MY_CS_BINSORT), DERIVATION_COERCIBLE);
  }

};


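/* Stub implementation: apart from a NULL check on the argument, the key is
   ignored and a fixed string is returned, which is enough to verify the
   plugin builds, loads and registers. */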
String *Item_funcmemc_get::val_str(String *str)
{
  assert(fixed == 1);
  String * sptr= args[0]->val_str(str);
  str->set_charset(&my_charset_bin);
  if (sptr)
  {
    null_value=0;
    str->set("hello memcached test", 20,system_charset_info);
    return str;
  }
  null_value=1;
  return 0;
}


/* Factory used to register the new function with the plugin registry */
Create_function<Item_funcmemc_get> memc_get_factory(string("memc_get"));

static int memcached_plugin_init(PluginRegistry &registry)
{
  registry.add(&memc_get_factory);
  return 0;
}

drizzle_declare_plugin(memcached)
{
  "memcached",
  "0.1",
  "Patrick Galbraith, Ronald Bradford",
  "memcached plugin",
  PLUGIN_LICENSE_GPL,
  memcached_plugin_init, /* Plugin Init */
  NULL,   /* Plugin Deinit */
  NULL,   /* status variables */
  NULL,   /* system variables */
  NULL    /* config options */
}
drizzle_declare_plugin_end;
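The stub above is of course only the 'Hello World'. The next step of the port is to swap the hard-coded string for a real libmemcached lookup. The following is a rough sketch only of what that val_str() might become: the localhost:11211 server is hard-coded, the connection is created per call purely for brevity (it really belongs in the plugin init), and the exact libmemcached type and function signatures vary between versions.

String *Item_funcmemc_get::val_str(String *str)
{
  assert(fixed == 1);
  String *key= args[0]->val_str(str);
  if (key == NULL)
  {
    null_value= 1;
    return 0;
  }

  /* Per-call connection purely for illustration; a real plugin would
     create this once in memcached_plugin_init(). */
  memcached_st *memc= memcached_create(NULL);
  memcached_server_add(memc, "localhost", 11211);

  memcached_return rc;      /* memcached_return_t in later libmemcached releases */
  size_t value_length;
  uint32_t flags;
  char *value= memcached_get(memc, key->ptr(), key->length(),
                             &value_length, &flags, &rc);

  if (value == NULL || rc != MEMCACHED_SUCCESS)
  {
    memcached_free(memc);
    null_value= 1;
    return 0;
  }

  null_value= 0;
  str->copy(value, value_length, &my_charset_bin);  /* copy before freeing */
  free(value);              /* memcached_get() allocates the returned value */
  memcached_free(memc);
  return str;
}

Even with just the stub in place, a quick SELECT MEMC_GET('foo') from the drizzle client should return the hard-coded 'hello memcached test' string, which confirms the plugin builds, loads and registers correctly.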
