
Wednesday, March 18, 2015

Fixing Error: Could not find a suitable provider in Puppet

I'm quite new to Puppet, and I had a Puppet script that configures a MySQL database working fine on a Puppet learning VM on VirtualBox. This issue happened when I installed and set up Puppet on a server of my own. I kept seeing the following error, and it drove me crazy for some time.

[hesxxxxxxx@xxxxxxpocx ~]$ sudo puppet apply --verbose --noop /etc/puppet/manifests/site.pp 
Info: Loading facts
Info: Loading facts
Info: Loading facts
Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
Notice: Compiled catalog for xxxxxxxxxxxxxxxxxxx in environment production in 0.99 seconds
Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
   (at /usr/lib/ruby/site_ruby/1.8/puppet/type/package.rb:430:in `default')
Info: Applying configuration version '1426611197'
Notice: /Stage[main]/Mysql::Server::Install/Package[mysql-server]/ensure: current_value absent, should be present (noop)
Notice: /Stage[main]/Mysql::Server::Install/Exec[mysql_install_db]/returns: current_value notrun, should be 0 (noop)
Notice: Class[Mysql::Server::Install]: Would have triggered 'refresh' from 2 events
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: current_value {md5}8ace886bbe7e274448bc8bea16d3ead6, should be {md5}d0d209eb5ed544658b3f1a72274bc3ed (noop)
Notice: /Stage[main]/Mysql::Server::Config/File[/etc/my.cnf.d]/ensure: current_value absent, should be directory (noop)
Notice: Class[Mysql::Server::Config]: Would have triggered 'refresh' from 2 events
Notice: /Stage[main]/Mysql::Server::Service/Service[mysqld]/ensure: current_value stopped, should be running (noop)
Info: /Stage[main]/Mysql::Server::Service/Service[mysqld]: Unscheduling refresh on Service[mysqld]
Notice: /Stage[main]/Mysql::Server::Service/File[/var/log/mysqld.log]/ensure: current_value absent, should be present (noop)
Notice: Class[Mysql::Server::Service]: Would have triggered 'refresh' from 2 events
Error: Could not prefetch mysql_grant provider 'mysql': Command mysql is missing
Notice: /Stage[main]/Main/Node[default]/Mysql_grant[m_user@localhost/lvm.*]: Dependency Mysql_user[m_user@localhost] has failures: true
Warning: /Stage[main]/Main/Node[default]/Mysql_grant[m_user@localhost/lvm.*]: Skipping because of failed dependencies
Notice: Stage[main]: Would have triggered 'refresh' from 3 events
Error: Could not find a suitable provider for mysql_user
Error: Could not find a suitable provider for mysql_database

The issue was that I was running Puppet in --noop mode. With --noop, the package resources are never actually applied, so the mysql client never gets installed; the mysql_user, mysql_database, and mysql_grant providers then fail because the mysql command they need to prefetch is missing. Removing --noop did the trick.
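With --noop removed, the same command actually applies the catalog, installing MySQL so the providers can find the mysql binary:

sudo puppet apply --verbose /etc/puppet/manifests/site.pp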

Although this is trivial, I thought of blogging this because someone might find this useful when facing the same issue as I did.


Monday, February 24, 2014

Fixing mysql replication errors in a master/slave setup

If you have a mysql master/slave replication setup and have run into replication errors, you can follow the instructions below to fix the broken replication and re-sync the data.
1) Stop mysql on the slave.
service mysqld stop
2) Login to the master.

3) Lock the master by running the following command
mysql> flush tables with read lock;
NOTE: No data can be written to the master while this lock is held, so be quick with the following steps if your site is in production. It is important that you do not close this mysql session or the SSH terminal it is running in. Log in on a separate SSH terminal for the next part.

4) Now, in a separate SSH session, run the following command on the master mysql server.
rsync -varPe ssh /var/lib/mysql root@IP_ADDRESS:/var/lib/ --delete-after
5) After this is done, you must get the binlog file and position from the master before you unlock the tables. Run the following command on the master.
mysql> show master status\G
Then you'll get some information on the master status. You need the file and the position, because that's what you're going to use on the slave in a moment. See step 10 for how this information is used, but please do not skip ahead to step 10.
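With the example values used in step 10, the output looks roughly like this (only File and Position matter here):

*************************** 1. row ***************************
            File: mysql-bin.000001
        Position: 1234512351
    Binlog_Do_DB:
Binlog_Ignore_DB: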

6) Unlock the master. It does not need to be locked now that you have the copy of the data and the log position.
mysql> unlock tables;
7) Now log in to the slave via SSH. First remove the two files /var/lib/mysql/master.info and /var/lib/mysql/relay-log.info.
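For example:

rm /var/lib/mysql/master.info /var/lib/mysql/relay-log.info

These files hold the slave's old replication coordinates; removing them lets the slave start clean with the coordinates we set in step 10.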

8) Start mysql on the slave.
service mysqld start
9) Immediately log in to mysql and stop the slave.
mysql> stop slave;
10) Now run the following statement, filling in the file and position from the show master status output above (step 5).
mysql> CHANGE MASTER TO MASTER_HOST='MASTER_IP',
    -> MASTER_USER='replicate', MASTER_PASSWORD='replicate',
    -> MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1234512351;
11) Now start the slave.
mysql> start slave;
12) Check slave status.
mysql> show slave status\G
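In the output, replication is healthy when both threads are running and the lag drops back to zero:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0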

Monday, November 18, 2013

Techniques to back up and restore mysql databases - A comparison

When I started working at Digital Mediat Solutions, one of the biggest problems we had was the runtime of the daily mysql backup scripts: they were taking 8 hours to complete. My first task was to fix the performance of the backups. The existing scripts were based on mysqldump.

I compiled a list of options to look into by researching the web. I'm sharing it here so that someone else might find it useful.

Out of the following options, the most feasible one for us (economically, disk-space-wise, and performance-wise) was mysqlhotcopy. The backup script I have since written based on mysqlhotcopy completes in an hour, which is a significant improvement.


Options

1) MySQL Enterprise Backup
pros
  • incremental backup
  • compressed backup
  • Backing up the physical database files makes restore much faster than logical techniques such as the mysqldump command.
  • InnoDB tables are copied using a hot backup mechanism. (Ideally, the InnoDB tables should represent a substantial majority of the data.)
  • Tables from other storage engines are copied using a warm backup mechanism.


cons
  • enterprise licensed
  • A minimal license, per MySQL server instance per year, is XXXX USD.
  • The cost will grow as the number of instances grows and will accumulate year after year.
  • It even exceeds our current budget for underlying hardware.
  • Therefore, I don’t think it’s feasible for our budget and IMV it’s not needed for the current deployment.
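For reference, a typical invocation of the tool looks like this (the backup directory is an assumption, and a commercial license is required):

mysqlbackup --user=root --password --backup-dir=/backup/mysql backup-and-apply-log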


2) MySQL dumps with diffs
pros
  • Saves space on the backup device, as we are backing up only the diff.
  • It won't generate much network traffic, as the full dump is transferred only the first time it runs.


cons
  • We are looking to eliminate the overhead generated by the mysqldump command itself.
  • Since this approach still runs the same command over and over again, it won't satisfy our need. Therefore, IMV this is not the option for us. (A sketch of the idea follows below for completeness.)
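A minimal sketch of the dump-plus-diff idea (file names and paths are hypothetical):

# take a full dump once, then store only the daily diff against that baseline
mysqldump --all-databases > /backup/base.sql
mysqldump --all-databases > /tmp/today.sql
diff /backup/base.sql /tmp/today.sql > /backup/diff-$(date +%F).patch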


3) Make incremental backups by enabling the binary logs.
pros
  • Used to set up replication
  • Used for restore operations
  • The above two pros outweigh the con.
  • Can do pinpoint restorations
  • Can tarball the existing logs and ship them to the backup location; just a matter of zipping files (see the sketch below).


cons
  • Makes performance slightly slower, since every write also goes to the binary log.
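A minimal sketch of this option (the log name and the cut-off time are assumptions): enable the binary log in my.cnf, then replay it with mysqlbinlog for a point-in-time restore on top of the last full backup.

[mysqld]
log-bin=mysql-bin

mysqlbinlog --stop-datetime="2013-11-18 04:00:00" mysql-bin.000001 | mysql -u root -p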


4) Suggestions to improve current backup scripts
  • Improve the cron job (which gets triggered during the low-usage period) to perform the slaving task without causing high CPU utilization.


5) Should investigate and run some performance metrics on the mysql-parallel-dump and mk-parallel-dump (http://www.maatkit.org/) utilities.
  • mk-parallel-dump is deprecated now, so it is not feasible.
  • TODO: Talk to Matt regarding:
    • Setting up a dev env so that I can test this stuff. I don't need any fancy new hardware; just give me a PC and I'll configure it with Linux and try these out.
    • Need to test these in a dev env so that we can roll them out to a production environment.
    • Need to have a mock server (with roughly the same traffic as the real servers) so that we can test and verify things properly.


6) Remove the current scripts, use a tool written specifically for this purpose, and see how it performs.


7) Update the current scripts with the --quick option, which avoids buffering large tables in memory and dumps rows one at a time, making dumps of large tables faster and lighter on memory. This will be a minor tweak to the current scripts.
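For example, the tweak is just one extra flag (the output path is an assumption):

mysqldump --quick --all-databases > /backup/all-databases.sql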


8) Use mysqlhotcopy instead of mysqldump
  • It uses FLUSH TABLES, LOCK TABLES, and cp or scp to make a database backup.
  • It is a fast way to back up a database or individual tables, but it can be run only on the machine where the database directories are located.
  • mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables.
  • Since we are using MyISAM, this won't be a problem for us (see the example run below).
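A typical run looks like this (the user, password, and target directory are assumptions):

mysqlhotcopy -u root -p PASSWORD db_name /var/backups/mysql --allowold --keepold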


9) Use SVN to back up databases.
  • This would give us revision history.
  • I don't think we need such a mechanism.


10) If replication to a slave isn't an option, we could leverage the filesystem, depending on the OS we are using (for example, snapshot-capable storage such as LVM on Linux); a sketch follows below.
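A minimal sketch of the snapshot idea on Linux with LVM (the volume group, logical volume, and mount point are all assumptions; note the read lock must be held in an open mysql session while the snapshot is taken):

mysql> flush tables with read lock;
# in a second terminal, snapshot the volume that holds /var/lib/mysql
lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
mysql> unlock tables;
# mount the snapshot, copy the files off, then drop the snapshot
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ /backup/mysql/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap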