Preface
We had a requirement to benchmark the performance of a single Elasticsearch index, which meant loading 100 million documents into one index. After comparing three common bulk-import approaches, I chose the file + shell bulk import.
The index settings and mapping are as follows:
PUT corpus_details_17
{
  "settings": {
    "index.blocks.read_only_allow_delete": "false",
    "index.max_result_window": "10000000",
    "number_of_replicas": "0",
    "number_of_shards": "1"
  },
  "mappings": {
    "properties": {
      "targetContent": {
        "type": "text"
      },
      "sourceContent": {
        "type": "text"
      },
      "sourceLanguageId": {
        "type": "long"
      },
      "realmCode": {
        "type": "long"
      },
      "createTime": {
        "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis",
        "type": "date"
      },
      "corpusScore": {
        "type": "float"
      },
      "id": {
        "type": "long"
      },
      "targetLanguageId": {
        "type": "long"
      }
    }
  }
}
Method 1: REST API + shell
Importing documents one at a time through the REST API is workable when the data volume is very small, but pushing 100 million documents this way takes far too long. Not recommended.
Method 2: bulk import with the Java client
This approach can be multi-threaded, but it is not fundamentally different from the REST API: it is only a few times faster, which is still very slow at the 100-million scale. Not recommended.
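For reference, a minimal sketch of what this approach might look like with the Elasticsearch REST high-level client. The class name, batch size, and sample document below are illustrative, and the basic-auth setup this cluster needs (the curl commands later use -u name:'pwd') is omitted:
import org.apache.http.HttpHost;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;

public class JavaClientBulkImport {
    public static void main(String[] args) throws Exception {
        // connection details follow the curl commands later in this post; auth omitted
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("172.16.0.65", 7201, "http")))) {
            BulkRequest bulk = new BulkRequest();
            // batch a few thousand documents per bulk request
            for (int i = 0; i < 5000; i++) {
                String doc = "{\"id\":" + i + ",\"sourceContent\":\"测试数据\"}";
                bulk.add(new IndexRequest("corpus_details_17").source(doc, XContentType.JSON));
            }
            BulkResponse response = client.bulk(bulk, RequestOptions.DEFAULT);
            if (response.hasFailures()) {
                System.out.println(response.buildFailureMessage());
            }
        }
    }
}
Even with batching and multiple threads, this remained too slow at the 100-million scale, which is why the file-based approach below was used instead.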
Method 3: generate data files in bulk + shell
Official docs: https://www.elastic.co/guide/cn/elasticsearch/guide/current/bulk.html#bulk
That page gives a brief introduction to the bulk import operation.
1. First, generate the data files to import. Each file is in the bulk JSON (NDJSON) format and must end with a final newline:
{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":15,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.1","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}
{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":16,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.2","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}
{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":17,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.3","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}
{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":18,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.4","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}
{"index":{"_index":"corpus_details_17","_type":"_doc"}}
{"id":19,"sourceContent":"测试数据","sourceLanguageId":1,"targetContent":"It's a cold winter AA.5","targetLanguageId":2,"realmCode":0,"corpusScore":0.842105,"createTime":1672292073000}
_index is the target index and _type is the document type (_doc by default in ES); the line after each action line is the document to insert. Keep each data file to around 25 MB. The following Java program generates the files:
import cn.hutool.json.JSONUtil;
import lombok.extern.slf4j.Slf4j;

import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime;

@Slf4j
public class GenerateFile {
    public static void main(String[] args) throws Exception {
        final LocalDateTime time = LocalDateTime.of(2022, 12, 29, 13, 34, 33);
        int count = 1;
        String filePath = "test" + count + ".json";
        FileOutputStream out = new FileOutputStream(filePath, false);
        for (int i = 0; i <= 100000000; i++) {
            // roll over to a new file every 100,000 documents (roughly 25 MB per file)
            if (i > 0 && i % 100000 == 0) {
                out.close();
                log.info("finished writing file: " + filePath);
                count++;
                filePath = "test" + count + ".json";
                out = new FileOutputStream(filePath, false);
            }
            CorpusDetailsMapping mapping = new CorpusDetailsMapping();
            mapping.setId((long) (i + 14));
            mapping.setSourceContent("测试数据");
            mapping.setSourceLanguageId(1);
            mapping.setTargetContent("It's a cold winter AA." + i);
            mapping.setTargetLanguageId(2);
            mapping.setRealmCode(0);
            mapping.setCorpusScore(0.842105f);
            mapping.setCreateTime(time);
            // one action line plus one document line per record; the trailing \n of the
            // last record doubles as the final newline the bulk API requires
            String json = JSONUtil.toJsonStr(mapping);
            json = "{\"index\":{\"_index\":\"corpus_details_17\",\"_type\":\"_doc\"}}\n" + json + "\n";
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        out.close();
        log.info("finished writing file: " + filePath);
    }
}
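The CorpusDetailsMapping class is not shown in the original post; a minimal reconstruction consistent with the index mapping and the setter calls above might look like this (the field types are assumptions, and Lombok's @Data generates the getters and setters):
import java.time.LocalDateTime;
import lombok.Data;

// Hypothetical POJO inferred from the index mapping and the GenerateFile code
@Data
public class CorpusDetailsMapping {
    private Long id;
    private String sourceContent;
    private Integer sourceLanguageId;
    private String targetContent;
    private Integer targetLanguageId;
    private Integer realmCode;
    private Float corpusScore;
    private LocalDateTime createTime;
}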
2. Each generated file can then be bulk-imported with curl:
curl -u name:'pwd' -XPUT "172.16.0.65:7201/_bulk" -H "Content-Type:application/json" --data-binary @test1.json
The call returns a JSON response; if its "errors" field is true, at least one document in that batch failed to index.
3. The Java program above generated 1001 files; the following shell script loops over them and imports each file in turn:
#!/bin/bash
# import test1.json .. test1001.json one after another
int=0
while (( int < 1001 ))
do
  let "int++"
  echo test"$int".json
  curl -u name:'pwd' -XPUT "172.16.0.65:7201/_bulk" -H "Content-Type:application/json" --data-binary @test"$int".json
done
4. Conclusion: a single thread generated the 100 million records' worth of data files in about half an hour, and the import of all 100 million documents completed in roughly one hour.