To import a large dump (~300GB) you could do the following:

Split your file first:

csplit -s -ftable dump.sql "/-- Table structure for table/" {999}

You could also use the pattern {*}, but then you need gcsplit (GNU csplit):

pkg install coreutils

BSD csplit doesn't support {*}.

Then try:

gcsplit -s -ftable dump.sql "/-- Table structure for table/" {*}
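
Every chunk after the first starts at the matched "-- Table structure for table" line, so peeking at the first line of each file tells you which table went where. Just a sanity check, assuming the chunk names table00, table01, ... produced by -ftable:

for f in table*; do
  printf '%s: ' "$f"
  head -n 1 "$f"    # prints the "-- Table structure for table ..." header
done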

Create small INSERT chunks:

gsplit -a 3 -d -n l/200 table05 x_

l/N splits the input into N files without splitting lines/records.
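
Because l/200 never cuts a line in half, the chunks concatenated back together must be byte-identical to the input. A quick check, assuming table05 was the input as above:

cat x_* | cmp - table05 && echo "chunks match the original"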

Set bulk_insert_buffer_size to 1GB:

mysql> SET bulk_insert_buffer_size = 1024 * 1024 * 1024;

Or in my.cnf:

[mysqld]
bulk_insert_buffer_size = 1G

This needs a server restart.
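
After the restart you can confirm that the value took effect (1G is 1073741824 bytes):

mysql> SHOW VARIABLES LIKE 'bulk_insert_buffer_size';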

Set the network buffer and the maximum allowed packet size to large values, and switch off the checks that slow down a bulk import:

mysql> SET GLOBAL net_buffer_length=1000000;
mysql> SET GLOBAL max_allowed_packet=1000000000;
mysql> SET foreign_key_checks = 0;
mysql> SET unique_checks = 0;
mysql> SET autocommit = 0;
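
Keep in mind that the session copy of max_allowed_packet is taken when the connection is opened, so the GLOBAL changes only help connections made after them; if you reconnect, re-run the three session-level SETs. To verify what the server actually accepted:

mysql> SHOW GLOBAL VARIABLES LIKE 'net_buffer_length';
mysql> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
mysql> SELECT @@foreign_key_checks, @@unique_checks, @@autocommit;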

Import the file:

mysql> source file.sql
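
source runs inside the client, so the session-level settings above apply to it. If you feed a file from the shell instead, the settings have to travel with it in that same session. A rough equivalent (file.sql and db_name are the same placeholders as above; the trailing COMMIT matters because autocommit is off):

( echo "SET foreign_key_checks=0; SET unique_checks=0; SET autocommit=0;"
  cat file.sql
  echo "COMMIT;"
) | mysql db_name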

An ugly way to prevent MySQL from consuming all memory is to restart the server after every chunk:

for t in x_*; do
  echo "$t"
  mysql db_name < "$t" && service mysql-server restart && sleep 2
done
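
Two caveats with this loop: every mysql invocation is a fresh session, so the session-level SETs above do not carry over to the chunks, and the loop keeps going even if a chunk fails. A variant under the same assumptions that re-applies the settings per chunk and stops at the first failure, so you know where to resume:

for t in x_*; do
  echo "importing $t"
  { echo "SET foreign_key_checks=0; SET unique_checks=0; SET autocommit=0;"
    cat "$t"
    echo "COMMIT;"
  } | mysql db_name || { echo "failed on $t" >&2; break; }
  service mysql-server restart && sleep 2
done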

Re-enable the checks:

mysql> SET foreign_key_checks = 1;
mysql> SET unique_checks = 1;
mysql> SET autocommit = 1;
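
Optionally, confirm that all tables made it (db_name as above):

mysql> SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'db_name';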