I want to build a large log server application: about 100 users will push log data to it automatically every hour. I've decided to use MySQL with InnoDB. I have a question:
After 5 years, with 100 active users each inserting 100 bytes every hour of the day, the table will hold 5 × 365 × 24 × 100 = 4,380,000 rows, which is 5 × 365 × 24 × 100 × 100 bytes ≈ 417 MB of raw data!
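A quick back-of-envelope check of these numbers (the constants are the ones from the question):

```python
# Storage estimate: 100 users, one 100-byte log entry per user per hour, 5 years.
users = 100
bytes_per_entry = 100
hours_per_year = 365 * 24

rows = 5 * hours_per_year * users
size_bytes = rows * bytes_per_entry

print(rows)                  # → 4380000
print(size_bytes / 1024**2)  # ≈ 417.7 (MiB of raw data, before index/row overhead)
```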
What will the search speed be? I suspect it will get slow. Can MySQL handle this? How long would it take to find the data for a single user?
Is there a faster way?
I've run tests on a fairly large InnoDB table (10 GB, 25 columns, 12 indexes, 8 foreign keys) that I have on a system. Counting the whole table is quite slow, but returning a row through an indexed value is essentially free. In any case, if you don't need foreign-key checks, MyISAM is faster than InnoDB:
```
MariaDB [db]> SELECT COUNT(1) FROM {table};
+----------+
| count(1) |
+----------+
| 18865472 |
+----------+
1 row in set (29.80 sec)

MariaDB [db]> SELECT * FROM {table} WHERE {indexed_field} = 108304177;
1 row in set (0.00 sec)
```
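The same indexed-lookup-vs-full-scan effect can be reproduced standalone with SQLite (a sketch; the `logs` schema and `idx_logs_id` index name are illustrative, not from the post above):

```python
import sqlite3

# Build a small logs table with an index on the user id column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER, creation_date TEXT, value INTEGER)")
conn.execute("CREATE INDEX idx_logs_id ON logs (id)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?)",
    [(i % 100, "2020-01-01 00:00:00", i) for i in range(10_000)],
)

# The query planner resolves the WHERE clause through the index,
# so it never scans the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM logs WHERE id = 42"
).fetchall()
print(plan[0][3])  # e.g. "SEARCH logs USING INDEX idx_logs_id (id=?)"
```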
To answer the comments: assume a logs table like this.
```sql
CREATE TABLE IF NOT EXISTS logs (
  id INT(10),
  creation_date TIMESTAMP,
  value INT(3)
);
```
You can then schedule the creation of an aggregate table, which is way faster to query for graphing purposes:
```sql
CREATE TABLE IF NOT EXISTS logs_aggregated (
  creation_date TIMESTAMP,
  nb_logs INT(3),
  value_average INT(3)
);

INSERT INTO logs_aggregated
SELECT LEFT(creation_date, 10) AS creation_date,
       COUNT(1) AS nb_logs,
       AVG(value) AS value_average
FROM logs
GROUP BY LEFT(creation_date, 10);
```
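To show the daily roll-up end to end, here is a runnable sketch using SQLite, where `substr(creation_date, 1, 10)` plays the role of MySQL's `LEFT(creation_date, 10)`; the sample data (two days of hourly logs) is made up for illustration:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER, creation_date TEXT, value INTEGER)")
conn.execute("""CREATE TABLE logs_aggregated (
    creation_date TEXT, nb_logs INTEGER, value_average REAL)""")

# Two days of hourly log entries with small integer values.
start = datetime(2020, 1, 1)
rows = [(i, (start + timedelta(hours=i)).strftime("%Y-%m-%d %H:%M:%S"), i % 10)
        for i in range(48)]
conn.executemany("INSERT INTO logs VALUES (?, ?, ?)", rows)

# One aggregated row per day: how many logs, and their average value.
conn.execute("""
    INSERT INTO logs_aggregated
    SELECT substr(creation_date, 1, 10) AS creation_date,
           COUNT(1) AS nb_logs,
           AVG(value) AS value_average
    FROM logs
    GROUP BY substr(creation_date, 1, 10)
""")
for row in conn.execute("SELECT * FROM logs_aggregated ORDER BY creation_date"):
    print(row)  # one (day, count, average) tuple per day
```

A graphing query then reads 2 aggregated rows instead of scanning 48 raw ones, and the ratio only improves as the raw table grows.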