Use OSS Select with Spark to accelerate data queries
This topic describes how to configure Spark to use OSS Select to accelerate data queries, and the benefits of querying data through OSS Select.
All operations in this topic are based on the CDH6 cluster and configuration built in Using Apache Impala (CDH6) to process OSS data.
Note: All ${} placeholders in this topic are environment variables. Replace them according to your actual environment.
Because Spark does not include the OSS support packages in its CLASSPATH by default, you must configure Spark to support reading from and writing to OSS. Perform the following operations on every CDH node:
Go to the ${CDH_HOME}/lib/spark directory and run the following commands:
```
[root@cdh-master spark]# cd jars/
[root@cdh-master jars]# ln -s ../../../jars/hadoop-aliyun-3.0.0-cdh6.0.1.jar hadoop-aliyun.jar
[root@cdh-master jars]# ln -s ../../../jars/aliyun-sdk-oss-2.8.3.jar aliyun-sdk-oss-2.8.3.jar
[root@cdh-master jars]# ln -s ../../../jars/jdom-1.1.jar jdom-1.1.jar
```
Go to the ${CDH_HOME}/lib/spark directory and run a test query:
```
[root@cdh-master spark]# ./bin/spark-shell
WARNING: User-defined SPARK_HOME (/opt/cloudera/parcels/CDH-6.0.1-1.cdh6.0.1.p0.590678/lib/spark) overrides detected (/opt/cloudera/parcels/CDH/lib/spark).
WARNING: Running spark-class from user-defined location.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://x.x.x.x:4040
Spark context available as 'sc' (master = yarn, app id = application_1540878848110_0004).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0-cdh6.0.1
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val myfile = sc.textFile("oss://{your-bucket-name}/50/store_sales")
myfile: org.apache.spark.rdd.RDD[String] = oss://{your-bucket-name}/50/store_sales MapPartitionsRDD[1] at textFile at <console>:24

scala> myfile.count()
res0: Long = 144004764

scala> myfile.map(line => line.split('|')).filter(_(0).toInt >= 2451262).take(3)
res15: Array[Array[String]] = Array(Array(2451262, 71079, 20359, 154660, 284233, 6206, 150579, 46, 512, 2160001, 84, 6.94, 11.38, 9.33, 681.83, 783.72, 582.96, 955.92, 5.09, 681.83, 101.89, 106.98, -481.07), Array(2451262, 71079, 26863, 154660, 284233, 6206, 150579, 46, 345, 2160001, 12, 67.82, 115.29, 25.36, 0.00, 304.32, 813.84, 1383.48, 21.30, 0.00, 304.32, 325.62, -509.52), Array(2451262, 71079, 55852, 154660, 284233, 6206, 150579, 46, 243, 2160001, 74, 32.41, 34.67, 1.38, 0.00, 102.12, 2398.34, 2565.58, 4.08, 0.00, 102.12, 106.20, -2296.22))

scala> myfile.map(line => line.split('|')).filter(_(0) >= "2451262").saveAsTextFile("oss://{your-bucket-name}/spark-oss-test.1")
```

If these queries run successfully, the deployment has taken effect.
For more information about OSS Select, see OSS Select. The following steps are based on the oss-cn-shenzhen.aliyuncs.com OSS endpoint. Perform the following operations on every CDH node:
Download the Spark support package for OSS Select to the ${CDH_HOME}/jars directory (the support package is currently in testing).
Extract the support package. You can list its contents as follows:
```
[root@cdh-master jars]# tar -tvf spark-2.2.0-oss-select-0.1.0-SNAPSHOT.tar.gz
drwxr-xr-x root/root         0 2018-10-30 17:59 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/
-rw-r--r-- root/root     26514 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/stax-api-1.0.1.jar
-rw-r--r-- root/root    547584 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/aliyun-sdk-oss-3.3.0.jar
-rw-r--r-- root/root     13277 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/aliyun-java-sdk-sts-3.0.0.jar
-rw-r--r-- root/root    116337 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/aliyun-java-sdk-core-3.4.0.jar
-rw-r--r-- root/root    215492 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/aliyun-java-sdk-ram-3.0.0.jar
-rw-r--r-- root/root     67758 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/jettison-1.1.jar
-rw-r--r-- root/root     57264 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/json-20170516.jar
-rw-r--r-- root/root    890168 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/jaxb-impl-2.2.3-1.jar
-rw-r--r-- root/root    458739 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/jersey-core-1.9.jar
-rw-r--r-- root/root    147952 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/jersey-json-1.9.jar
-rw-r--r-- root/root    788137 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/aliyun-java-sdk-ecs-4.2.0.jar
-rw-r--r-- root/root    153115 2018-10-30 16:11 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/jdom-1.1.jar
-rw-r--r-- root/root     65437 2018-10-31 14:41 spark-2.2.0-oss-select-0.1.0-SNAPSHOT/aliyun-oss-select-spark_2.11-0.1.0-SNAPSHOT.jar
```
Go to the ${CDH_HOME}/lib/spark/jars directory and run the following commands:
```
[root@cdh-master jars]# pwd
/opt/cloudera/parcels/CDH/lib/spark/jars
[root@cdh-master jars]# rm -f aliyun-sdk-oss-2.8.3.jar
[root@cdh-master jars]# ln -s ../../../jars/aliyun-oss-select-spark_2.11-0.1.0-SNAPSHOT.jar aliyun-oss-select-spark_2.11-0.1.0-SNAPSHOT.jar
[root@cdh-master jars]# ln -s ../../../jars/aliyun-java-sdk-core-3.4.0.jar aliyun-java-sdk-core-3.4.0.jar
[root@cdh-master jars]# ln -s ../../../jars/aliyun-java-sdk-ecs-4.2.0.jar aliyun-java-sdk-ecs-4.2.0.jar
[root@cdh-master jars]# ln -s ../../../jars/aliyun-java-sdk-ram-3.0.0.jar aliyun-java-sdk-ram-3.0.0.jar
[root@cdh-master jars]# ln -s ../../../jars/aliyun-java-sdk-sts-3.0.0.jar aliyun-java-sdk-sts-3.0.0.jar
[root@cdh-master jars]# ln -s ../../../jars/aliyun-sdk-oss-3.3.0.jar aliyun-sdk-oss-3.3.0.jar
[root@cdh-master jars]# ln -s ../../../jars/jdom-1.1.jar jdom-1.1.jar
```
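Before running any queries, you can sanity-check that the new jars are visible to Spark. Spark resolves `USING com.aliyun.oss` to a `DefaultSource` class inside that package, so the following one-liner in spark-shell should succeed once the symlinks are in place. The exact class name is an assumption based on Spark's data source naming convention, not something documented by the support package:

```scala
// Throws ClassNotFoundException if the OSS Select support jar is not on the classpath.
// "com.aliyun.oss.DefaultSource" follows Spark's DataSource resolution convention
// (provider package + ".DefaultSource") and is assumed here, not documented.
Class.forName("com.aliyun.oss.DefaultSource")
```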
Test environment: the comparison test runs Spark on YARN with four NodeManager nodes. Each node can run at most four containers, and each container is provisioned with one CPU core and 2 GB of memory.
Test data: 630 MB in total, with three columns: name, company, and age.
```
[root@cdh-master jars]# hadoop fs -ls oss://select-test-sz/people/
Found 10 items
-rw-rw-rw-   1   63079930 2018-10-30 17:03 oss://select-test-sz/people/part-00000
-rw-rw-rw-   1   63079930 2018-10-30 17:03 oss://select-test-sz/people/part-00001
-rw-rw-rw-   1   63079930 2018-10-30 17:05 oss://select-test-sz/people/part-00002
-rw-rw-rw-   1   63079930 2018-10-30 17:05 oss://select-test-sz/people/part-00003
-rw-rw-rw-   1   63079930 2018-10-30 17:06 oss://select-test-sz/people/part-00004
-rw-rw-rw-   1   63079930 2018-10-30 17:12 oss://select-test-sz/people/part-00005
-rw-rw-rw-   1   63079930 2018-10-30 17:14 oss://select-test-sz/people/part-00006
-rw-rw-rw-   1   63079930 2018-10-30 17:14 oss://select-test-sz/people/part-00007
-rw-rw-rw-   1   63079930 2018-10-30 17:15 oss://select-test-sz/people/part-00008
-rw-rw-rw-   1   63079930 2018-10-30 17:16 oss://select-test-sz/people/part-00009
```
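The listing shows ten part files of roughly 63 MB each. If you want to reproduce a comparable dataset, the following spark-shell sketch writes random name,company,age rows as CSV. The name and company lists, the row count, and the output path are illustrative placeholders, not the data used in the original test:

```scala
// Minimal sketch: generate a people-like CSV dataset (name,company,age).
// Scale the row count and partition count up to approach the 630 MB used in the test.
import scala.util.Random

val firstNames = Seq("Lora", "John", "Mei", "Ahmed", "Sven")   // hypothetical values
val companies  = Seq("Acme", "Globex", "Initech", "Umbrella")  // hypothetical values

val rows = sc.parallelize(1 to 10000000, 10).map { i =>
  val name    = firstNames(i % firstNames.size) + Random.nextInt(10000)
  val company = companies(Random.nextInt(companies.size))
  val age     = 18 + Random.nextInt(50)
  s"$name,$company,$age"
}
rows.saveAsTextFile("oss://{your-bucket-name}/people/")        // adjust bucket/prefix
```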
Go to ${CDH_HOME}/lib/spark/ and start spark-shell. Run the query first through OSS Select and then without it:
```
[root@cdh-master spark]# ./bin/spark-shell
WARNING: User-defined SPARK_HOME (/opt/cloudera/parcels/CDH-6.0.1-1.cdh6.0.1.p0.590678/lib/spark) overrides detected (/opt/cloudera/parcels/CDH/lib/spark).
WARNING: Running spark-class from user-defined location.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://x.x.x.x:4040
Spark context available as 'sc' (master = yarn, app id = application_1540887123331_0008).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0-cdh6.0.1
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val sqlContext = spark.sqlContext
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@4bdef487

scala> sqlContext.sql("CREATE TEMPORARY VIEW people USING com.aliyun.oss " +
     |   "OPTIONS (" +
     |   "oss.bucket 'select-test-sz', " +
     |   "oss.prefix 'people', " +           // objects with this prefix belong to this table
     |   "oss.schema 'name string, company string, age long'," + // like 'column_a long, column_b string'
     |   "oss.data.format 'csv'," +          // we only support csv now
     |   "oss.input.csv.header 'None'," +
     |   "oss.input.csv.recordDelimiter '\r\n'," +
     |   "oss.input.csv.fieldDelimiter ','," +
     |   "oss.input.csv.commentChar '#'," +
     |   "oss.input.csv.quoteChar '\"'," +
     |   "oss.output.csv.recordDelimiter '\n'," +
     |   "oss.output.csv.fieldDelimiter ','," +
     |   "oss.output.csv.quoteChar '\"'," +
     |   "oss.endpoint 'oss-cn-shenzhen.aliyuncs.com', " +
     |   "oss.accessKeyId 'Your Access Key Id', " +
     |   "oss.accessKeySecret 'Your Access Key Secret')")
res0: org.apache.spark.sql.DataFrame = []

scala> val sql: String = "select count(*) from people where name like 'Lora%'"
sql: String = select count(*) from people where name like 'Lora%'

scala> sqlContext.sql(sql).show()
+--------+
|count(1)|
+--------+
|   31770|
+--------+

scala> val textFile = sc.textFile("oss://select-test-sz/people/")
textFile: org.apache.spark.rdd.RDD[String] = oss://select-test-sz/people/ MapPartitionsRDD[8] at textFile at <console>:24

scala> textFile.map(line => line.split(',')).filter(_(0).startsWith("Lora")).count()
res3: Long = 31770
```

As the job timings show, the query through OSS Select takes 15 s, while the same query without OSS Select takes 54 s: OSS Select dramatically speeds up the query.
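If you want the timings on the console rather than in the Spark web UI, you can wrap each query in `spark.time`, which SparkSession has provided since Spark 2.1:

```scala
// Prints "Time taken: ... ms" for each expression, then returns its result.
spark.time(sqlContext.sql("select count(*) from people where name like 'Lora%'").show())

spark.time(
  sc.textFile("oss://select-test-sz/people/")
    .map(_.split(','))
    .filter(_(0).startsWith("Lora"))
    .count()
)
```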
Spark is connected to OSS Select by extending Spark's DataSource API. By implementing PrunedFilteredScan, the required columns and the filter conditions can be pushed down to OSS Select for execution. The support package is still under development; the defined specification and the supported filter conditions are as follows:
Defined specification:
scala> sqlContext.sql("CREATE TEMPORARY VIEW people USING com.aliyun.oss " + | "OPTIONS (" + | "oss.bucket 'select-test-sz', " + | "oss.prefix 'people', " + // objects with this prefix belong to this table | "oss.schema 'name string, company string, age long'," + // like 'column_a long, column_b string' | "oss.data.format 'csv'," + // we only support csv now | "oss.input.csv.header 'None'," + | "oss.input.csv.recordDelimiter 'rn'," + | "oss.input.csv.fieldDelimiter ','," + | "oss.input.csv.commentChar '#'," + | "oss.input.csv.quoteChar '"'," + | "oss.output.csv.recordDelimiter 'n'," + | "oss.output.csv.fieldDelimiter ','," + | "oss.output.csv.quoteChar '"'," + | "oss.endpoint 'oss-cn-shenzhen.aliyuncs.com', " + | "oss.accessKeyId 'Your Access Key Id', " + | "oss.accessKeySecret 'Your Access Key Secret')")| 字段 | 说明 |
|---|---|
| oss.bucket | The bucket where the data is stored. |
| oss.prefix | All objects with this prefix belong to the defined TEMPORARY VIEW. |
| oss.schema | The schema of the TEMPORARY VIEW. Currently specified as a string; specifying the schema through a file will be supported later. |
| oss.data.format | The format of the data content. Currently only CSV is supported; other formats will follow. |
| oss.input.csv.* | CSV input format parameters. |
| oss.output.csv.* | CSV output format parameters. |
| oss.endpoint | The endpoint where the bucket resides. |
| oss.accessKeyId | Your AccessKey ID. |
| oss.accessKeySecret | Your AccessKey Secret. |
Note: Only the basic parameters are defined at present. For details, see SelectObject. The remaining parameters will be supported later.
Supported filter conditions: =, <, >, <=, >=, ||, or, not, and, in, like (StringStartsWith, StringEndsWith, StringContains). For filter conditions that cannot be pushed down, such as arithmetic operations or string concatenation, which PrunedFilteredScan cannot capture, only the required columns are pushed down to OSS Select. A minimal illustrative sketch of this mechanism follows the note below.
Note: OSS Select supports other filter conditions as well. For details, see SelectObject.
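To make the pushdown mechanics concrete, here is a minimal sketch of a PrunedFilteredScan relation that maps Spark's Filter objects to an OSS Select WHERE clause. This is an illustration only: the class name, the SQL fragments, and the `ossobject` table name are assumptions, not the actual code of the support package:

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources._
import org.apache.spark.sql.types.StructType

// Illustrative sketch only: not the actual aliyun-oss-select-spark implementation.
class OssSelectRelationSketch(
    override val sqlContext: SQLContext,
    override val schema: StructType)
  extends BaseRelation with PrunedFilteredScan {

  // Map a Spark Filter to an OSS Select SQL fragment.
  // None means "cannot push down"; Spark then re-applies that filter itself.
  private def toSelectSql(f: Filter): Option[String] = f match {
    case EqualTo(a, v)            => Some(s"$a = '$v'")
    case LessThan(a, v)           => Some(s"$a < '$v'")
    case GreaterThan(a, v)        => Some(s"$a > '$v'")
    case LessThanOrEqual(a, v)    => Some(s"$a <= '$v'")
    case GreaterThanOrEqual(a, v) => Some(s"$a >= '$v'")
    case In(a, vs)                => Some(s"$a IN (${vs.map(v => s"'$v'").mkString(", ")})")
    case StringStartsWith(a, v)   => Some(s"$a LIKE '$v%'")
    case StringEndsWith(a, v)     => Some(s"$a LIKE '%$v'")
    case StringContains(a, v)     => Some(s"$a LIKE '%$v%'")
    case Not(c)                   => toSelectSql(c).map(s => s"NOT ($s)")
    case And(l, r) => for (a <- toSelectSql(l); b <- toSelectSql(r)) yield s"($a AND $b)"
    case Or(l, r)  => for (a <- toSelectSql(l); b <- toSelectSql(r)) yield s"($a OR $b)"
    case _         => None // e.g. arithmetic or concatenation: prune columns only
  }

  override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
    val where  = filters.flatMap(toSelectSql).mkString(" AND ")
    val select = s"SELECT ${requiredColumns.mkString(", ")} FROM ossobject" +
      (if (where.nonEmpty) s" WHERE $where" else "")
    // A real implementation would send `select` to the OSS SelectObject API for
    // each object under oss.prefix and build an RDD over the returned records.
    sqlContext.sparkContext.emptyRDD[Row]
  }
}
```

Spark calls buildScan with only the columns the query needs and whatever filters it could extract, which is why conditions it cannot express as Filter objects never reach this method and benefit only from column pruning.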
To verify the effect of the configuration, we test the query performance of TPC-H query1.sql against the lineitem table. To let OSS Select filter out more data, the where condition is changed from l_shipdate <= '1998-09-16' to l_shipdate > '1997-09-16'. The test data is 2.27 GB. The two approaches, plain Spark SQL and Spark SQL with OSS Select, are shown below:
Query with plain Spark SQL
```
[root@cdh-master ~]# hadoop fs -ls oss://select-test-sz/data/lineitem.csv
-rw-rw-rw-   1 2441079322 2018-10-31 11:18 oss://select-test-sz/data/lineitem.csv
```
```
scala> import org.apache.spark.sql.types.{IntegerType, LongType, StringType, StructField, StructType, DoubleType}
import org.apache.spark.sql.types.{IntegerType, LongType, StringType, StructField, StructType, DoubleType}

scala> import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.{Row, SQLContext}

scala> val sqlContext = spark.sqlContext
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@74e2cfc5

scala> val textFile = sc.textFile("oss://select-test-sz/data/lineitem.csv")
textFile: org.apache.spark.rdd.RDD[String] = oss://select-test-sz/data/lineitem.csv MapPartitionsRDD[1] at textFile at <console>:26

scala> val dataRdd = textFile.map(_.split('|'))
dataRdd: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[2] at map at <console>:28

scala> val schema = StructType(
     |   List(
     |     StructField("L_ORDERKEY", LongType, true),
     |     StructField("L_PARTKEY", LongType, true),
     |     StructField("L_SUPPKEY", LongType, true),
     |     StructField("L_LINENUMBER", IntegerType, true),
     |     StructField("L_QUANTITY", DoubleType, true),
     |     StructField("L_EXTENDEDPRICE", DoubleType, true),
     |     StructField("L_DISCOUNT", DoubleType, true),
     |     StructField("L_TAX", DoubleType, true),
     |     StructField("L_RETURNFLAG", StringType, true),
     |     StructField("L_LINESTATUS", StringType, true),
     |     StructField("L_SHIPDATE", StringType, true),
     |     StructField("L_COMMITDATE", StringType, true),
     |     StructField("L_RECEIPTDATE", StringType, true),
     |     StructField("L_SHIPINSTRUCT", StringType, true),
     |     StructField("L_SHIPMODE", StringType, true),
     |     StructField("L_COMMENT", StringType, true)
     |   )
     | )
schema: org.apache.spark.sql.types.StructType = StructType(StructField(L_ORDERKEY,LongType,true), StructField(L_PARTKEY,LongType,true), StructField(L_SUPPKEY,LongType,true), StructField(L_LINENUMBER,IntegerType,true), StructField(L_QUANTITY,DoubleType,true), StructField(L_EXTENDEDPRICE,DoubleType,true), StructField(L_DISCOUNT,DoubleType,true), StructField(L_TAX,DoubleType,true), StructField(L_RETURNFLAG,StringType,true), StructField(L_LINESTATUS,StringType,true), StructField(L_SHIPDATE,StringType,true), StructField(L_COMMITDATE,StringType,true), StructField(L_RECEIPTDATE,StringType,true), StructField(L_SHIPINSTRUCT,StringType,true), StructField(L_SHIPMODE,StringType,true), StructField(L_COMMENT,StringType,true))

scala> val dataRowRdd = dataRdd.map(p => Row(p(0).toLong, p(1).toLong, p(2).toLong, p(3).toInt, p(4).toDouble, p(5).toDouble, p(6).toDouble, p(7).toDouble, p(8), p(9), p(10), p(11), p(12), p(13), p(14), p(15)))
dataRowRdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[3] at map at <console>:30

scala> val dataFrame = sqlContext.createDataFrame(dataRowRdd, schema)
dataFrame: org.apache.spark.sql.DataFrame = [L_ORDERKEY: bigint, L_PARTKEY: bigint ... 14 more fields]

scala> dataFrame.createOrReplaceTempView("lineitem")

scala> spark.sql("select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedprice) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_disc, count(*) as count_order from lineitem where l_shipdate > '1997-09-16' group by l_returnflag, l_linestatus order by l_returnflag, l_linestatus").show()
+------------+------------+-----------+--------------------+--------------------+--------------------+------------------+------------------+-------------------+-----------+
|l_returnflag|l_linestatus|    sum_qty|      sum_base_price|      sum_disc_price|          sum_charge|           avg_qty|         avg_price|           avg_disc|count_order|
+------------+------------+-----------+--------------------+--------------------+--------------------+------------------+------------------+-------------------+-----------+
|           N|           O|7.5697385E7|1.135107538838699...|1.078345555027154...|1.121504616321447...|25.501957856643052|38241.036487881756|0.04999335309103123|    2968297|
+------------+------------+-----------+--------------------+--------------------+--------------------+------------------+------------------+-------------------+-----------+
```

Query with OSS Select on Spark SQL

```
scala> sqlContext.sql("CREATE TEMPORARY VIEW item USING com.aliyun.oss " +
     |   "OPTIONS (" +
     |   "oss.bucket 'select-test-sz', " +
     |   "oss.prefix 'data', " +
     |   "oss.schema 'L_ORDERKEY long, L_PARTKEY long, L_SUPPKEY long, L_LINENUMBER int, L_QUANTITY double, L_EXTENDEDPRICE double, L_DISCOUNT double, L_TAX double, L_RETURNFLAG string, L_LINESTATUS string, L_SHIPDATE string, L_COMMITDATE string, L_RECEIPTDATE string, L_SHIPINSTRUCT string, L_SHIPMODE string, L_COMMENT string'," +
     |   "oss.data.format 'csv'," +          // we only support csv now
     |   "oss.input.csv.header 'None'," +
     |   "oss.input.csv.recordDelimiter '\n'," +
     |   "oss.input.csv.fieldDelimiter '|'," +
     |   "oss.input.csv.commentChar '#'," +
     |   "oss.input.csv.quoteChar '\"'," +
     |   "oss.output.csv.recordDelimiter '\n'," +
     |   "oss.output.csv.fieldDelimiter ','," +
     |   "oss.output.csv.commentChar '#'," +
     |   "oss.output.csv.quoteChar '\"'," +
     |   "oss.endpoint 'oss-cn-shenzhen.aliyuncs.com', " +
     |   "oss.accessKeyId 'Your Access Key Id', " +
     |   "oss.accessKeySecret 'Your Access Key Secret')")
res2: org.apache.spark.sql.DataFrame = []

scala> sqlContext.sql("select l_returnflag, l_linestatus, sum(l_quantity) as sum_qty, sum(l_extendedprice) as sum_base_price, sum(l_extendedprice * (1 - l_discount)) as sum_disc_price, sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge, avg(l_quantity) as avg_qty, avg(l_extendedprice) as avg_price, avg(l_discount) as avg_disc, count(*) as count_order from item where l_shipdate > '1997-09-16' group by l_returnflag, l_linestatus order by l_returnflag, l_linestatus").show()
+------------+------------+-----------+--------------------+--------------------+--------------------+------------------+-----------------+-------------------+-----------+
|l_returnflag|l_linestatus|    sum_qty|      sum_base_price|      sum_disc_price|          sum_charge|           avg_qty|        avg_price|           avg_disc|count_order|
+------------+------------+-----------+--------------------+--------------------+--------------------+------------------+-----------------+-------------------+-----------+
|           N|           O|7.5697385E7|1.135107538838701E11|1.078345555027154...|1.121504616321447...|25.501957856643052|38241.03648788181|0.04999335309103024|    2968297|
+------------+------------+-----------+--------------------+--------------------+--------------------+------------------+-----------------+-------------------+-----------+
```

As the job timings show, the query through OSS Select on Spark SQL takes 38 s, while the plain Spark SQL query takes 2.5 min: OSS Select substantially accelerates the query.