A table consisting of 4 million rows is indexed as follows.
index 1 to 2000: Company 1 to Company 2000, January 1, 2013 information
index 2001 to 4000: Company 1 to Company 2000, January 2, 2013 information
index 4001 to 6000: Company 1 to Company 2000, January 3, 2013 information
In this way, the information by company is indexed by date.
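(So, for example, Company 500's row for January 3, 2013 sits at index (3 - 1) * 2000 + 500 = 4500.)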
The query I run from R reads the company information for December 26, 2019, so it should fetch 2000 consecutive rows.
Shouldn't that lookup be fast, since it pulls up a consecutive block of rows? It is only 2000 rows out of 4 million.
(Strictly speaking it is about 2,200 rows rather than exactly 2,000, but I use 2,000 to keep the explanation simple.)
r mysql
It seems very likely that the "indexing" you describe is not what MySQL (or SQL databases in general) means by an index.
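As a quick check (a minimal sketch, using the same table name as the queries below), this lists the indexes MySQL actually has on the table:
/* Lists every index MySQL knows about on this table. If no index covers
   the year, month and day columns, the row ordering described in the
   question is not a MySQL index. */
SHOW INDEX FROM Basic12MFW1D;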
If you have an environment where you can run raw queries (e.g. phpMyAdmin, MySQL Workbench, DBeaver, etc.), run the following two queries, then copy the results as text (or screenshots) and add them to your question as an edit. Then I think we can see why this query is so slow.
Query 1
EXPLAIN /* This keyword asks MySQL for the execution plan. */
SELECT /* Immediately after EXPLAIN, put the query you originally wanted to run. */
code,
name,
ROUND(ChangeOP, 4) AS 12MFWOP,
ROUND(ChangeNP, 4) AS 12MFWNP,
ROUND(ChangeOPM, 4) AS 12MFWOPM,
ROUND(ChangeROE, 4) AS 12MFWROE,
ROUND(ChangeSales, 4) AS 12MFWSales,
year,
month,
day
FROM
Basic12MFW1D
WHERE
year = 2019
AND month = 12
AND day = 26;
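(In the EXPLAIN output, a row showing type = ALL with key = NULL would mean MySQL scans all 4 million rows for every query, which would explain the slowness; whether a usable index exists is what Query 2 below will show.)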
Query 2
/* This is a MySQL command to view the structure of this table. */
SHOW CREATE TABLE Basic12MFW1D;
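If SHOW CREATE TABLE shows no index covering the year, month and day columns, a composite index on those three columns would very likely fix the slowness. A minimal sketch (the index name idx_year_month_day is just an example):
/* Add a composite index so the WHERE year = ... AND month = ... AND day = ...
   filter can be resolved through the index instead of a full table scan. */
ALTER TABLE Basic12MFW1D ADD INDEX idx_year_month_day (year, month, day);
Building the index on 4 million rows takes a moment, but afterwards the query should only have to touch the roughly 2,000 matching rows.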