How to fine-tune our MySQL server?

8

MySQL is not my thing, but I need to fine-tune one of our servers.

Here are the requirements / specs:

  • The MySQL server has only one significant database
  • We have only one "type" of application connected, and not many instances are connected at the same time: at most 15 (these applications are XMPP bots)
  • These applications use non-blocking I/O, which means they never wait on the database server and keep processing incoming requests while database queries are running. This implies that at times one instance of the application can hold several (many!) connections to the database server (especially if some queries are slow)
  • All queries use indexes
  • Our host machine runs only MySQL. It is a Xen instance (@slicehost) with 2 GB of RAM.
  • We use InnoDB tables because we need a few basic transactions, but we could probably switch to MyISAM if that had a real impact on performance.

As it is configured right now, our MySQL server slowly starts eating all the available memory (we use collectd; here is a graph). At some point (after a few days/weeks), it stops executing queries (it stalled for 2 hours last night, and I had to restart the MySQL server: see the 2nd image):

(sorry, new users can't post images, and only 1 hyperlink :/)

Here is our current my.cnf:

#
# The MySQL database server configuration file.
#
# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port        = 3306
socket      = /var/run/mysqld/mysqld.sock

# Here are entries for some specific programs
# The following values assume you have at least 32M ram

# This was formerly known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice        = 0

[mysqld]
#
# * Basic Settings
#

#
# * IMPORTANT
#   If you make changes to these settings and your system uses apparmor, you may
#   also need to adjust /etc/apparmor.d/usr.sbin.mysqld.
#

user        = mysql
pid-file            = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
language            = /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
# yann changed this on a friday balbla
#bind-address       = 127.0.0.1
bind-address        = 0.0.0.0
#
# * Fine Tuning
#
key_buffer      = 16M
max_allowed_packet  = 16M
thread_stack        = 128K
thread_cache_size   = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover      = BACKUP
max_connections        = 2000
#table_cache            = 64
#thread_concurrency     = 10
#
# * Query Cache Configuration
#
query_cache_limit       = 1M
query_cache_size        = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# log       = /var/log/mysql/mysql.log
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
log_slow_queries    = /var/log/mysql/mysql-slow.log
long_query_time = 3
log-queries-not-using-indexes

#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#       other settings you may need to change.
#server-id      = 1
#log_bin            = /var/log/mysql/mysql-bin.log
expire_logs_days    = 10
max_binlog_size         = 100M
#binlog_do_db       = include_database_name
#binlog_ignore_db   = include_database_name
#
# * BerkeleyDB
#
# Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
skip-bdb
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
# You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
#skip-innodb

# Fine tuning added by JG on 06/03 based on http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/
innodb_buffer_pool_size  = 1G
#innodb_log_file_size     = 256M
innodb_log_buffer_size   = 4M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency      = 8
innodb_flush_method            = O_DIRECT
innodb_file_per_table
transaction-isolation          = READ-COMMITTED
innodb_table_locks             = 0         

#
# * Federated
#
# The FEDERATED storage engine is disabled since 5.0.67 by default in the .cnf files
# shipped with MySQL distributions (my-huge.cnf, my-medium.cnf, and so forth).
#
skip-federated
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem



[mysqldump]
quick
quote-names
max_allowed_packet  = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer      = 16M

#
# * NDB Cluster
#
# See /usr/share/doc/mysql-server-*/README.Debian for more information.
#
# The following configuration is read by the NDB Data Nodes (ndbd processes)
# not from the NDB Management Nodes (ndb_mgmd processes).
#
# [MYSQL_CLUSTER]
# ndb-connectstring=127.0.0.1


#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

Here is a slow-query dump:

$ mysqldumpslow /var/log/mysql/mysql-slow.log

Reading mysql slow query log from /var/log/mysql/mysql-slow.log
Count: 5  Time=3689348814741910528.00s (-1s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  SET insert_id=N;
  INSERT IGNORE INTO `feeds` (`url`) VALUES ('S')

Count: 41  Time=1349761761490942720.00s (-1s)  Lock=0.12s (5s)  Rows=253.0 (10373), superfeeder[superfeeder]@localhost
  SHOW GLOBAL STATUS

Count: 25  Time=737869762948382080.00s (-1s)  Lock=0.00s (0s)  Rows=18.1 (452), superfeeder[superfeeder]@[172.21.1.158]
  SELECT `feeds`.* FROM `feeds` WHERE (`fetch_session_id` = 'S')

Count: 12952  Time=1424239042133230.25s (-1s)  Lock=0.00s (1s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  SET insert_id=N;
  INSERT IGNORE INTO `entries` (`chunks`, `time`, `feed_id`, `unique_id`, `link`, `chunk`) VALUES ('S', 'S', N, 'S', 'S', 'S')

Count: 29  Time=656.55s (19040s)  Lock=5.28s (153s)  Rows=0.8 (23), superfeeder[superfeeder]@[172.21.1.175]
  select salt,crypted_password from users where login='S'

Count: 39  Time=505.23s (19704s)  Lock=2.41s (94s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  DELETE FROM `feeds` WHERE (url LIKE 'S')

Count: 2275  Time=502.50s (1143184s)  Lock=3.48s (7922s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `next_fetch` = 'S', `fetch_session_id` = 'S' WHERE (`next_fetch` < 'S') LIMIT N

Count: 1  Time=443.00s (443s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN (NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL))

Count: 14  Time=289.43s (4052s)  Lock=0.71s (10s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN ('S','S'))

Count: 2  Time=256.00s (512s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN (NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL))

Count: 1  Time=237.00s (237s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN ('S'))

Count: 24  Time=191.58s (4598s)  Lock=1.12s (27s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`id` = 'S')

Count: 5  Time=144.20s (721s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `next_fetch` = 'S' WHERE (`feeds`.`url` IN (NULL,NULL,NULL))

Count: 1  Time=101.00s (101s)  Lock=1.00s (1s)  Rows=1.0 (1), superfeeder[superfeeder]@[172.21.1.158]
  SELECT * FROM `users` WHERE (`login` = 'S') LIMIT N

Count: 79  Time=35.51s (2805s)  Lock=2.52s (199s)  Rows=0.2 (12), superfeeder[superfeeder]@[172.21.1.184]
  SELECT `feeds`.id FROM `feeds` WHERE (`feeds`.`url` = BINARY 'S' AND `feeds`.id <> N)  LIMIT N

Count: 1  Time=28.00s (28s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  UPDATE `feeds` SET `last_maintenance_at` = 'S', `updated_at` = 'S' WHERE `id` = N

Count: 51  Time=23.51s (1199s)  Lock=0.12s (6s)  Rows=19.2 (981), superfeeder[superfeeder]@2hosts
  SELECT version FROM schema_migrations

Count: 5  Time=20.60s (103s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  BEGIN

Count: 65  Time=15.86s (1031s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = 'S', `last_sup_update_id` = NULL, `updated_at` = 'S', `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = 'S', `active` = 'S', `last_fetch` = 'S', `created_at` = 'S', `max_period` = 'S' WHERE (`id` = N)

Count: 23  Time=11.52s (265s)  Lock=0.00s (0s)  Rows=231.0 (5313), superfeeder[superfeeder]@2hosts
  #

Count: 132  Time=10.53s (1390s)  Lock=0.02s (2s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = 'S', `last_sup_update_id` = NULL, `updated_at` = 'S', `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = 'S', `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 62  Time=9.81s (608s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.184]
  ROLLBACK

Count: 151  Time=8.94s (1350s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@2hosts
  DELETE FROM `entries` WHERE (`time` < 'S')

Count: 25  Time=8.76s (219s)  Lock=0.00s (0s)  Rows=1.0 (24), superfeeder[superfeeder]@[172.21.1.158]
  SELECT * FROM `feeds` WHERE (`url` = 'S') LIMIT N

Count: 2  Time=8.50s (17s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  set SQL_AUTO_IS_NULL=N

Count: 8802  Time=8.44s (74319s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  INSERT IGNORE INTO `entries` (`chunks`, `time`, `feed_id`, `unique_id`, `link`, `chunk`) VALUES ('S', 'S', N, 'S', 'S', 'S')

Count: 1  Time=8.00s (8s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  INSERT IGNORE INTO `subscriptions` (`user_id`, `feed_id`) VALUES (N, N)

Count: 38  Time=7.92s (301s)  Lock=0.00s (0s)  Rows=1.0 (38), superfeeder[superfeeder]@[172.21.1.184]
  SELECT count(DISTINCT `users`.id) AS count_users_id FROM `users`  INNER JOIN `subscriptions` ON `users`.id = `subscriptions`.user_id    WHERE ((`subscriptions`.feed_id = N))

Count: 9  Time=7.67s (69s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  INSERT IGNORE INTO `feeds` (`url`) VALUES ('S')

Count: 244  Time=7.20s (1756s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = N, `last_sup_update_id` = NULL, `updated_at` = 'S', `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = N, `active` = 'S', `last_fetch` = 'S', `created_at` = 'S', `max_period` = 'S' WHERE (`id` = N)

Count: 336  Time=6.85s (2301s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = N, `last_sup_update_id` = NULL, `updated_at` = 'S', `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = N, `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 16  Time=6.38s (102s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = N, `last_sup_update_id` = NULL, `updated_at` = NULL, `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = N, `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 122  Time=5.91s (721s)  Lock=0.00s (0s)  Rows=1.0 (119), superfeeder[superfeeder]@[172.21.1.158]
  SELECT DISTINCT `users`.* FROM `users` INNER JOIN `subscriptions` ON (`subscriptions`.`user_id` = `users`.`id`) WHERE (`subscriptions`.`feed_id` = N)

Count: 299  Time=5.78s (1727s)  Lock=0.00s (0s)  Rows=1.0 (299), superfeeder[superfeeder]@[172.21.1.158]
  SELECT * FROM `feeds` WHERE (`id` = 'S')

Count: 21  Time=5.48s (115s)  Lock=0.00s (0s)  Rows=1.0 (21), superfeeder[superfeeder]@[172.21.1.158]
  SELECT * FROM `subscriptions` WHERE ((`user_id` = N) AND (`feed_id` = N)) LIMIT N

Count: 27  Time=5.37s (145s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = 'S', `last_sup_update_id` = NULL, `updated_at` = NULL, `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = 'S', `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 9  Time=4.33s (39s)  Lock=0.00s (0s)  Rows=0.0 (0), superfeeder[superfeeder]@[172.21.1.158]
  UPDATE `feeds` SET `last_error_message` = 'S', `period` = 'S', `last_sup_update_id` = NULL, `updated_at` = NULL, `modified` = 'S', `fetch_session_id` = 'S', `streamed` = 'S', `last_parse` = 'S', `etag` = 'S', `last_entry_time` = 'S', `min_period` = 'S', `url` = 'S', `id` = 'S', `feed_type` = NULL, `sup_id` = NULL, `sup_url_id` = NULL, `next_fetch` = 'S', `hashed_content` = 'S', `last_maintenance_at` = 'S', `last_ping` = NULL, `last_http_code` = NULL, `active` = 'S', `last_fetch` = 'S', `created_at` = NULL, `max_period` = 'S' WHERE (`id` = N)

Count: 1  Time=4.00s (4s)  Lock=0.00s (0s)  Rows=1.0 (1), superfeeder[superfeeder]@[172.21.1.175]
  select id from users where login='S'

Count: 1  Time=3.00s (3s)  Lock=0.00s (0s)  Rows=22.0 (22), debian-sys-maint[debian-sys-maint]@localhost
  select concat("S",
  TABLE_SCHEMA, "S", TABLE_NAME, "S") 
  from information_schema.TABLES where ENGINE="S"

Count: 1056  Time=0.11s (111s)  Lock=0.00s (0s)  Rows=126.9 (133998), superfeeder[superfeeder]@[172.21.1.184]
  SELECT * FROM `feeds` WHERE (last_maintenance_at < 'S')

Count: 1049  Time=0.00s (1s)  Lock=0.00s (0s)  Rows=3.1 (3303), superfeeder[superfeeder]@[172.21.1.184]
  SELECT * FROM `users` WHERE (one_week_anniversary_sent = N AND activated_at < 'S')

Count: 21  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), 0users@0hosts
  administrator command: Ping

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), debian-sys-maint[debian-sys-maint]@localhost
  select count(*) into @discard from `information_schema`.`COLUMNS`

Count: 8  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=30.0 (240), superfeeder[superfeeder]@[172.21.1.184]
  SELECT DISTINCT `feeds`.* FROM `feeds` INNER JOIN `subscriptions` ON `feeds`.id = `subscriptions`.feed_id WHERE ((`subscriptions`.user_id = N)) AND ((`subscriptions`.user_id = N))  LIMIT N, N

Count: 31  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=1.0 (31), superfeeder[superfeeder]@2hosts
  SELECT count(*) AS count_all FROM `feeds`

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), debian-sys-maint[debian-sys-maint]@localhost
  select count(*) into @discard from `information_schema`.`TRIGGERS`

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), debian-sys-maint[debian-sys-maint]@localhost
  select count(*) into @discard from `information_schema`.`VIEWS`

Count: 52  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.7 (34), superfeeder[superfeeder]@[172.21.1.184]
  SELECT * FROM `users` WHERE (`users`.`remember_token` = 'S')  LIMIT N

Count: 120  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=1.0 (120), superfeeder[superfeeder]@2hosts
  SELECT * FROM `feeds`  ORDER BY feeds.id DESC LIMIT N

Count: 19  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=15.7 (299), superfeeder[superfeeder]@2hosts
  SELECT count(*) AS count_all, last_http_code AS last_http_code FROM `feeds`  GROUP BY last_http_code

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), debian-sys-maint[debian-sys-maint]@localhost
  select count(*) into @discard from `information_schema`.`ROUTINES`

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=1.0 (1), debian-sys-maint[debian-sys-maint]@localhost
  SELECT count(*) FROM mysql.user WHERE user='S' and password='S'

Table definition for feeds:

+---------------------+--------------+------+-----+---------------------+----------------+
| Field               | Type         | Null | Key | Default             | Extra          |
+---------------------+--------------+------+-----+---------------------+----------------+
| id                  | int(11)      | NO   | PRI | NULL                | auto_increment | 
| url                 | varchar(255) | YES  | UNI | NULL                |                | 
| last_parse          | datetime     | YES  |     | 2009-08-10 14:51:46 |                | 
| etag                | varchar(255) | YES  |     | etag                |                | 
| modified            | datetime     | YES  |     | 2009-08-10 14:51:46 |                | 
| active              | tinyint(1)   | YES  | MUL | 1                   |                | 
| last_fetch          | datetime     | YES  |     | 2009-08-10 14:51:46 |                | 
| next_fetch          | datetime     | YES  | MUL | 2009-08-10 14:51:46 |                | 
| fetch_session_id    | varchar(255) | YES  | MUL |                     |                | 
| period              | int(11)      | YES  |     | 240                 |                | 
| hashed_content      | varchar(255) | YES  |     |                     |                | 
| streamed            | tinyint(1)   | YES  |     | 0                   |                | 
| sup_id              | varchar(255) | YES  | MUL | NULL                |                | 
| last_sup_update_id  | varchar(255) | YES  |     | NULL                |                | 
| last_entry_time     | datetime     | YES  |     | 2009-08-10 14:51:46 |                | 
| last_ping           | datetime     | YES  |     | NULL                |                | 
| last_http_code      | int(11)      | YES  |     | NULL                |                | 
| last_error_message  | varchar(255) | YES  |     |                     |                | 
| sup_url_id          | int(11)      | YES  | MUL | NULL                |                | 
| created_at          | datetime     | YES  |     | NULL                |                | 
| updated_at          | datetime     | YES  |     | NULL                |                | 
| last_maintenance_at | datetime     | YES  |     | 2008-08-10 21:51:50 |                | 
| min_period          | int(11)      | YES  |     | 60                  |                | 
| max_period          | int(11)      | YES  |     | 900                 |                | 
+---------------------+--------------+------+-----+---------------------+----------------+

+-------+------------+--------------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name                             | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+--------------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+
| feeds |          0 | PRIMARY                              |            1 | id               | A         |         166 |     NULL | NULL   |      | BTREE      |         | 
| feeds |          0 | index_feeds_on_url                   |            1 | url              | A         |         166 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_next_fetch_and_active |            1 | next_fetch       | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_next_fetch_and_active |            2 | active           | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_sup_id                |            1 | sup_id           | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_sup_url_id            |            1 | sup_url_id       | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
| feeds |          1 | index_feeds_on_fetch_session_id      |            1 | fetch_session_id | A         |           1 |     NULL | NULL   | YES  | BTREE      |         | 
+-------+------------+--------------------------------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+
Julien Genestoux
What does the error log say?
shantanuo

Answers:

12

You should probably not consider MyISAM; InnoDB will work for you. MyISAM may be faster for SELECTs, but (for example) it locks your whole table on updates.

As for InnoDB:

  • generally, always consider more RAM before you move to sharding (DB size = ~RAM)
  • take a look at the following variables (see the sketch after this list):
    • innodb_buffer_pool_size (we use about 60-70% of our memory)
    • innodb_log_file_size
    • innodb_log_buffer_size
    • innodb_flush_log_at_trx_commit
    • innodb_thread_concurrency
    • innodb_flush_method=O_DIRECT
    • innodb_file_per_table
  • switch from InnoDB to XtraDB (same API)
  • use the Percona builds (they include performance patches from Google, etc.)
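
As an illustrative aside (not part of the original answer), you can check what the server is actually running with for these settings, and whether the buffer pool is under pressure, straight from SQL:

-- Current values of the InnoDB settings listed above
SHOW VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size', 'innodb_log_file_size', 'innodb_log_buffer_size',
   'innodb_flush_log_at_trx_commit', 'innodb_thread_concurrency',
   'innodb_flush_method', 'innodb_file_per_table');

-- Buffer pool usage: few free pages plus a growing Innodb_buffer_pool_reads
-- counter (reads that had to hit disk) suggests the pool is too small; a 61 MB
-- database inside a 1 GB pool should show almost no disk reads once warmed up.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';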

Great reads:

On a side note:

  • a 2 GB slice is simply not enough to run this
  • also, I found the storage at slicehost to be quite slow (I/O is a factor)
  • in the cloud it can make sense to shard earlier (because of the RAM limit)
  • I would run all the queries through EXPLAIN to make sure the index is really used (see the example right after this list)
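
A sketch of that EXPLAIN check, using queries from the slow log above; the literal values are made up for illustration (the log only shows 'S' and N placeholders), and since EXPLAIN in MySQL of this vintage only accepts SELECT, the WHERE clause of a slow UPDATE is rewritten as a SELECT:

-- One of the slow SELECTs from the log
EXPLAIN SELECT `feeds`.* FROM `feeds`
WHERE (`fetch_session_id` = 'example-session-id');

-- The slow UPDATE on next_fetch, rewritten as a SELECT so EXPLAIN accepts it
EXPLAIN SELECT `id` FROM `feeds`
WHERE (`next_fetch` < '2009-08-10 14:51:46') LIMIT 10;

A "type" of ref or range together with a key name means the index is used; "type: ALL" means a full table scan despite the index.
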
Till
Excellent answer! Right now our database is 61M in size... so if I trust your DB size = ~RAM statement, we have enough RAM! I do think the storage is slow (because it is virtualized, I guess). I'll read your links. Thanks!
Julien Genestoux
(and yes, I ran all the queries through EXPLAIN and they all use indexes)
Julien Genestoux
Hi Julien - first of all, you vote answers up here. ;-) Second, just having RAM is useless if MySQL doesn't use it. Make sure you double-check the settings so that it does.
Till
Yeah, well that's exactly where I need help ;) I think MySQL does use it (actually too much!, since our server seems to run out of RAM after a certain point). The database is 61M; how come it "eats" 2 GB of RAM?
Julien Genestoux
Can you add the table definition for your "worst" query?
Till
2

I have found MySQLTuner very effective in the past - it can make sensible suggestions for configuration changes based on how your server is being used. It is based on the tuning-primer script, which is also worth a try.
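
As a rough sketch of the kind of arithmetic such scripts do (not their actual code): the worst-case memory footprint is roughly the global buffers plus max_connections times the sum of the per-connection buffers, which matters here because the my.cnf above allows 2000 connections on a 2 GB box:

-- Global buffers (allocated once)
SHOW VARIABLES WHERE Variable_name IN
  ('key_buffer_size', 'query_cache_size',
   'innodb_buffer_pool_size', 'innodb_log_buffer_size');

-- Per-connection buffers: each of the max_connections threads may allocate
-- these, so multiply their sum by max_connections for the worst case
SHOW VARIABLES WHERE Variable_name IN
  ('max_connections', 'thread_stack', 'sort_buffer_size',
   'read_buffer_size', 'read_rnd_buffer_size', 'join_buffer_size');

-- How many connections have actually been used since startup
SHOW GLOBAL STATUS LIKE 'Max_used_connections';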

Dan
1

Does your application release connections back to the pool when it has finished its work?
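
One server-side way to check this (my addition, not part of the original answer) is to look at how many connections the bots are actually holding at a given moment:

-- Every open connection, with the user/host it comes from
SHOW FULL PROCESSLIST;

-- Just the current count of open connections
SHOW GLOBAL STATUS LIKE 'Threads_connected';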


No, because it is never done with them. It's not like a webapp where the application "dies" between connections.
Julien Genestoux
0

If you can replace the WHERE clauses with specific IDs (assuming there aren't many of them), that could speed up those updates.
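
A sketch of that rewrite; the table and column names come from the schema above, while the URLs, ids and dates are made-up example values:

-- Instead of matching on the VARCHAR url index:
--   UPDATE feeds SET next_fetch = '...' WHERE feeds.url IN ('http://a.example/feed', 'http://b.example/feed');

-- look up the integer ids first (cheap, and could even run elsewhere) ...
SELECT id FROM feeds
WHERE url IN ('http://a.example/feed', 'http://b.example/feed');

-- ... then update by primary key
UPDATE feeds SET next_fetch = '2009-08-11 10:00:00' WHERE id IN (42, 43);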

Using something like Sphinx search, which you can easily cluster, to find the items ahead of time would help. I have heard it can return lookups faster than MySQL even knew it had indexes.

http://www.sphinxsearch.com/

Mark L
Sorry, but I don't see the point? We don't do any searching on our database...
Julien Genestoux
You do: .. WHERE (`feeds`.`url` IN (NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL)) .. and .. WHERE (`feeds`.`url` IN ('S', 'S')) .. Those could be more expensive than first finding the integer ids via Sphinx, and with Sphinx you can easily move that load to another server without having to worry about master-slave configurations and bin-log delays.
Mark L
feeds.url is an index. Is the primary key faster than an index?
Julien Genestoux
If it were an integer instead of (I'm guessing) a VARCHAR(10) or something like that, then yes, the integer would use less space and MySQL could find it faster in the index.
Mark L
But wouldn't that be really marginal?
Julien Genestoux
0

Check the queries that take that long again. Are the tables properly indexed?

You can also run MySQL Tuner to fine-tune your mysql settings.

hdanniel
Yeah, that happens when the server stalls, and once again I checked all the queries: they all use indexes. (+ the slowest queries are also INSERT queries... which has nothing to do with indexes, AFAIK)
Julien Genestoux