Inserting 60,000 rows taking too long

by Dario2
17 replies
I'm trying to insert about 60,000 rows into my database on localhost (Windows). It has now been running for about an hour and is still inserting.
Shouldn't it be faster?
Code:
INSERT INTO ip2nation (ip, country) VALUES
 (687865856, 'za'),
 (689963008, 'eg'),
 (691011584, 'za'),
 (691617792, 'zw'),
 (691621888, 'lr'),
 (691625984, 'ke'),
 (691630080, 'za'),
 (691631104, 'gh'),
 (691632128, 'ng'),
 (691633152, 'zw'),
 (691634176, 'za'),
 (691650560, 'gh'),
...
#inserting #long #mysql #rows #taking
  • RobinInTexas
    I would think so, but try breaking it into smaller chunks, like 10,000 rows at a time (something like the sketch below).

    If it's not done after 4+ hours, something's hung up.
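
    Roughly what I mean, as a sketch only (row values copied from your snippet; each INSERT would really carry one 10,000-row chunk):
    Code:
    INSERT INTO ip2nation (ip, country) VALUES
     (687865856, 'za'),
     (689963008, 'eg');
    -- ...next chunk of up to 10,000 rows in its own statement...
    INSERT INTO ip2nation (ip, country) VALUES
     (691011584, 'za'),
     (691617792, 'zw');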
    Signature

    Robin



    ...Even if you're on the right track, you'll get run over if you just set there.
  • sautaja
    There are a few factors that affect how fast records get inserted. I don't know much about other databases, but for PostgreSQL, the hardware matters:

    * SSD vs. HDD
    * RAID level (1, 5, 6, etc.)
    * disk layout (storing the WAL on a separate partition seems to give better performance)
    Signature
    Jomify - Free multi-channel shopping cart. Open your free store now.
  • Dario2
    So is it OK for it to take this long? I read that it should only take a few seconds.
  • Dog Rescuer
    Originally Posted by Dario2 View Post

    I'm trying to insert about 60,000 rows into my database on localhost (Windows). It has now been running for about an hour and is still inserting.
    Shouldn't it be faster?
    Code:
    INSERT INTO ip2nation (ip, country) VALUES
     (687865856, 'za'),
     (689963008, 'eg'),
     (691011584, 'za'),
     (691617792, 'zw'),
     (691621888, 'lr'),
     (691625984, 'ke'),
     (691630080, 'za'),
     (691631104, 'gh'),
     (691632128, 'ng'),
     (691633152, 'zw'),
     (691634176, 'za'),
     (691650560, 'gh'),
    ...
    Especially on localhost it should be faster.
    There should be logging functions in your database that you can turn on to profile the activity.
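
    In MySQL, for example, session profiling can be switched on like this (just a rough sketch; the exact SHOW PROFILE output depends on your server version):
    Code:
    SET profiling = 1;
    -- run the big INSERT here
    SHOW PROFILES;              -- lists recent statements with their duration
    SHOW PROFILE FOR QUERY 1;   -- per-stage timing for statement #1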
  • kellter
    Try BigDump by Ozerov.

    Saved me on a 400 MB database and it imported in seconds.
  • TechIntuit
    I think that you would need to clear the query cache.

    Check this article out for more details: Clearing Cache for SQL Server Performance Testing

    Basically this is what you would need to use as mentioned in the article:

    USE <YOURDATABASENAME>;
    GO
    CHECKPOINT;
    GO
    DBCC DROPCLEANBUFFERS;
    GO

    Give it a try and let me know how it goes.

    Good luck!
    Signature
    Entrepreneurs & Internet Marketers...

    Looking for a professional team of WordPress plugin developers? Well, you have just found them. Click here to learn more.
    • SteveJohnson
      Originally Posted by kellter View Post

      Try BigDump by Ozerov.

      Saved me on a 400 MB database and it imported in seconds.
      No need.

      Originally Posted by TechIntuit View Post

      I think that you would need to clear the query cache.

      Check this article out for more details: Clearing Cache for SQL Server Performance Testing

      Basically this is what you would need to use as mentioned in the article:

      USE <YOURDATABASENAME>;
      GO
      CHECKPOINT;
      GO
      DBCC DROPCLEANBUFFERS;
      GO

      Give it a try and let me know how it goes.

      Good luck!
      He's using MySQL, not SQL Server. Your suggestions won't work.

      OP, that insert statement should take no more than a few seconds to run.

      I have the same db, just for giggles I dumped the table, truncated it, and ran the insert. 59,667 rows in 2.46 seconds.

      You have some other issue going on. Make sure your table columns are defined correctly, and not using varchar column types.
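
      For reference, something along these lines is what I'd expect the table to look like (my assumption about the ip2nation layout, not the official schema dump):
      Code:
      CREATE TABLE ip2nation (
        ip      INT UNSIGNED NOT NULL DEFAULT 0,   -- numeric IP, not VARCHAR
        country CHAR(2)      NOT NULL DEFAULT '',
        KEY ip (ip)
      );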
      Signature

      The 2nd Amendment, 1789 - The Original Homeland Security.

      Gun control means never having to say, "I missed you."

      • kellter
        BigDump IS for MySQL, but whatever
        • develozard
          I would put the data in a flat file (I prefer .csv) and use the following:

          LOAD DATA LOCAL INFILE "C:/temp/my_file.csv"
          INTO TABLE tablename
          FIELDS TERMINATED BY ';'
          LINES TERMINATED BY '\n'
          IGNORE 1 LINES;

          It was way quicker than inserting every row individually. But I had only 15k items and a fairly strong server ;-)
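
          One caveat (my assumption based on default MySQL settings, so check your own config): LOAD DATA LOCAL only works if local_infile is allowed on both the server and the client.
          Code:
          SHOW VARIABLES LIKE 'local_infile';
          SET GLOBAL local_infile = 1;   -- needs an admin/SUPER privilege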
          Signature

          You need someone to translate your texts to German? Just click on the LINK to see my thread in the "Warrior for Hire" section. Or send me a private message.

          • atlet
            I also suggest you use this approach. I inserted over 200k records in 2 seconds (i5, 8 GB RAM).

            Originally Posted by develozard View Post

            I would put the data in a flat file (I prefer .csv) and use the following:

            LOAD DATA LOCAL INFILE "C:/temp/my_file.csv"
            INTO TABLE tablename
            FIELDS TERMINATED BY ';'
            LINES TERMINATED BY '\n'
            IGNORE 1 LINES;

            It was way quicker than inserting every row individually. But I had only 15k items and a fairly strong server ;-)
        • SteveJohnson
          Originally Posted by kellter View Post

          BigDump IS for MySQL, but whatever
          Yes, it is, but his query isn't a 'big dump'.

          That just doesn't sound right, does it?

          I didn't mean that it wouldn't work, but that it wasn't needed for his particular issue - it isn't the query size that is the problem.
          Signature

          The 2nd Amendment, 1789 - The Original Homeland Security.

          Gun control means never having to say, "I missed you."

  • mariscal19
    Too little information.
    It's taking far too much time, that's for sure.
    Check for indexes or triggers on the table that could be slowing down the process (see the statements sketched below).
    The way you are doing it, everything runs in a single transaction; that can have an impact if the machine is already swapping.
    Try one insert per row, just to check how it goes.
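
    For what it's worth, the index and trigger check might look like this (a sketch, using the table name from the first post):
    Code:
    SHOW INDEX FROM ip2nation;
    SHOW TRIGGERS LIKE 'ip2nation';
    SHOW CREATE TABLE ip2nation;   -- full definition, including engine and keys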

    Monitor your CPU and memory usage when you run the insert.
    Make a dump of the database, move it to a different box, check there.

    Hope this helps.
  • wayfarer
    I don't care how many indexes you have; you're only inserting 2 fields, so it should only take a few seconds even with 60K rows. It sounds like a table deadlock to me. When you run the following query, what does it say?
    Code:
    mysql> show processlist;
    There should be a "time" column that shows how long each query has been running. If you see any other suspect query that has been running longer than your insert has, and you don't need it, find the process id (left hand column), and kill it like so:
    Code:
    mysql> kill 123456;
    I'm really only guessing based on the information at hand, but this might work.
    Signature
    I build web things, server things. I help build the startup Veenome. | Remote Programming Jobs
  • valvednd
    Why would you do that line by line? Can you import from a file? It'd be much faster because the server would treat it as a table...
  • brettb
    Another vote for BigDump. I don't know how it does it, but it saved my ass when I couldn't figure out how to upload a 140 MB table dump to my GoDaddy MySQL hosting plan.

    I'll also add that Perl is immensely fast at processing text files, which often comes in handy when you're doing big stuff with databases.
    Signature
    FindABlog: Find blogs to comment on, guest posting opportunities and more




  • David B
    Check your indexes. SQL*Loader with a bit of sed/awk preprocessing can be your friend for bulk loading.
