[SPARK-24228][SQL] Fix Java lint errors
author Kazuaki Ishizaki <ishizaki@jp.ibm.com>
Mon, 14 May 2018 02:57:10 +0000 (10:57 +0800)
committer hyukjinkwon <gurwls223@apache.org>
Mon, 14 May 2018 02:57:10 +0000 (10:57 +0800)
## What changes were proposed in this pull request?
This PR fixes the following Java lint errors: unused imports and lines longer than 100 characters.

```
$ dev/lint-java
Using `mvn` from path: /usr/bin/mvn
Checkstyle checks failed at following occurrences:
[ERROR] src/main/java/org/apache/spark/sql/sources/v2/reader/partitioning/Distribution.java:[25] (sizes) LineLength: Line is longer than 100 characters (found 109).
[ERROR] src/main/java/org/apache/spark/sql/sources/v2/reader/streaming/ContinuousReader.java:[38] (sizes) LineLength: Line is longer than 100 characters (found 102).
[ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java:[21,8] (imports) UnusedImports: Unused import - java.io.ByteArrayInputStream.
[ERROR] src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedPlainValuesReader.java:[29,8] (imports) UnusedImports: Unused import - org.apache.spark.unsafe.Platform.
[ERROR] src/test/java/test/org/apache/spark/sql/sources/v2/JavaAdvancedDataSourceV2.java:[110] (sizes) LineLength: Line is longer than 100 characters (found 101).
```

With this PR applied:
```
$ dev/lint-java
Using `mvn` from path: /usr/bin/mvn
Checkstyle checks passed.
```
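
The LineLength rule flagged above can be approximated outside Maven with a one-liner; this is a hedged sketch using `awk` (not the real `dev/lint-java`, which runs the Maven Checkstyle plugin), and `/tmp/Foo.java` is a hypothetical file created only for the demo:

```shell
# Sketch: approximate Checkstyle's LineLength rule (max 100 characters).
# /tmp/Foo.java is a throwaway demo file, not part of the Spark tree.
head -c 120 /dev/zero | tr '\0' 'x' > /tmp/Foo.java
# Print an error line in a Checkstyle-like format for every over-long line.
awk 'length($0) > 100 {printf "[ERROR] %s:[%d] LineLength: Line is longer than 100 characters (found %d).\n", FILENAME, NR, length($0)}' /tmp/Foo.java
```

This only checks raw character counts; the real check also honors Checkstyle's configured exceptions (e.g. ignored patterns), so treat it as a quick pre-flight, not a substitute for `dev/lint-java`.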

## How was this patch tested?

Existing UTs. Also manually ran Checkstyle against the affected files.

Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>

Closes #21301 from kiszk/SPARK-24228.

sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedPlainValuesReader.java
sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/partitioning/Distribution.java
sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/streaming/ContinuousReader.java
sql/core/src/test/java/test/org/apache/spark/sql/sources/v2/JavaAdvancedDataSourceV2.java

index aacefac..c62dc3d 100644 (file)
@@ -26,7 +26,6 @@ import org.apache.spark.sql.execution.vectorized.WritableColumnVector;
 
 import org.apache.parquet.column.values.ValuesReader;
 import org.apache.parquet.io.api.Binary;
-import org.apache.spark.unsafe.Platform;
 
 /**
  * An implementation of the Parquet PLAIN decoder that supports the vectorized interface.
index d2ee951..5e32ba6 100644 (file)
@@ -22,7 +22,8 @@ import org.apache.spark.sql.sources.v2.reader.InputPartitionReader;
 
 /**
  * An interface to represent data distribution requirement, which specifies how the records should
- * be distributed among the data partitions(one {@link InputPartitionReader} outputs data for one partition).
+ * be distributed among the data partitions (one {@link InputPartitionReader} outputs data for one
+ * partition).
  * Note that this interface has nothing to do with the data ordering inside one
  * partition(the output records of a single {@link InputPartitionReader}).
  *
index 716c5c0..6e960be 100644 (file)
@@ -35,8 +35,8 @@ import java.util.Optional;
 @InterfaceStability.Evolving
 public interface ContinuousReader extends BaseStreamingSource, DataSourceReader {
     /**
-     * Merge partitioned offsets coming from {@link ContinuousInputPartitionReader} instances for each
-     * partition to a single global offset.
+     * Merge partitioned offsets coming from {@link ContinuousInputPartitionReader} instances
+     * for each partition to a single global offset.
      */
     Offset mergeOffsets(PartitionOffset[] offsets);
 
index 714638e..445cb29 100644 (file)
@@ -107,7 +107,8 @@ public class JavaAdvancedDataSourceV2 implements DataSourceV2, ReadSupport {
     }
   }
 
-  static class JavaAdvancedInputPartition implements InputPartition<Row>, InputPartitionReader<Row> {
+  static class JavaAdvancedInputPartition implements InputPartition<Row>,
+      InputPartitionReader<Row> {
     private int start;
     private int end;
     private StructType requiredSchema;