spark.git
17 months ago  Preparing Spark release v2.1.1-rc4 v2.1.1
Patrick Wendell [Tue, 25 Apr 2017 23:28:22 +0000 (16:28 -0700)] 
Preparing Spark release v2.1.1-rc4

17 months ago  [SPARK-20239][CORE][2.1-BACKPORT] Improve HistoryServer's ACL mechanism
jerryshao [Tue, 25 Apr 2017 22:21:12 +0000 (15:21 -0700)] 
[SPARK-20239][CORE][2.1-BACKPORT] Improve HistoryServer's ACL mechanism

Current SHS (Spark History Server) has two different ACLs:

* The ACL of the base URL. It is controlled by "spark.acls.enabled" or "spark.ui.acls.enabled". With this enabled, only users configured in "spark.admin.acls" (or groups), "spark.ui.view.acls" (or groups), or the user who started the SHS can list the applications; otherwise none can be listed. This also affects the REST APIs that list the summary of all apps or of one app.
* The per-application ACL, controlled by "spark.history.ui.acls.enabled". With this enabled, only the history admin user and the user/group who ran an app can access that app's details.

With these two ACLs, we may encounter several unexpected behaviors:

1. If the base URL's ACL (`spark.acls.enable`) is enabled but user "A" has no view permission, user "A" cannot see the app list but can still access the details of their own app.
2. If the base URL's ACL (`spark.acls.enable`) is disabled, user "A" can download any application's event log, even one not run by user "A".
3. Changes to the Live UI's ACL also affect the History UI's ACL, since they share the same conf file.

These unexpected behaviors arise mainly because we have two different ACLs; ideally we should have only one to manage everything.

To improve the SHS's ACL mechanism, this PR proposes to:

1. Disable "spark.acls.enable" and use only "spark.history.ui.acls.enable" for the history server.
2. Check permissions in the event-log download REST API.

With this PR:

1. Admin users can see and download the list of all applications, as well as application details.
2. Normal users can see the list of all applications, but can only download and check the details of applications accessible to them.
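As a rough sketch of the intended permission model (the names `HistoryAcls`, `AppInfo`, `canViewDetails`, etc. are illustrative, not Spark's actual API):

```scala
case class AppInfo(owner: String)

class HistoryAcls(admins: Set[String]) {
  // Any user may list applications.
  def canList(user: String): Boolean = true

  // Details are visible to admins and to the user who ran the app.
  def canViewDetails(user: String, app: AppInfo): Boolean =
    admins.contains(user) || app.owner == user

  // The event-log download REST API applies the same check,
  // so event logs can no longer be downloaded by arbitrary users.
  def canDownload(user: String, app: AppInfo): Boolean =
    canViewDetails(user, app)
}
```

The key design point is that listing and detail access use different rules, while download and detail access share the same check.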

New UTs are added; also verified on a real cluster.

CC tgravescs vanzin please help review; this PR changes the semantics you implemented previously. Thanks a lot.

Author: jerryshao <sshao@hortonworks.com>

Closes #17755 from jerryshao/SPARK-20239-2.1-backport.

17 months ago  [SPARK-20404][CORE] Using Option(name) instead of Some(name)
Sergey Zhemzhitsky [Tue, 25 Apr 2017 08:18:36 +0000 (09:18 +0100)] 
[SPARK-20404][CORE] Using Option(name) instead of Some(name)

Use Option(name) instead of Some(name) to prevent runtime failures when using accumulators created like the following:
```
sparkContext.accumulator(0, null)
```
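The difference is visible in plain Scala: `Some(null)` wraps the null, while `Option(null)` normalizes it to `None`. A minimal sketch:

```scala
object OptionVsSome {
  def main(args: Array[String]): Unit = {
    val unsafe: Option[String] = Some(null)   // a Some that contains null
    val safe: Option[String]   = Option(null) // normalized to None

    assert(unsafe.isDefined) // Some(null) still counts as "defined"
    assert(safe.isEmpty)     // Option(null) is None

    // Forcing the wrapped value is where Some(null) bites:
    // unsafe.map(_.length) would throw a NullPointerException,
    // while safe.map(_.length) simply yields None.
    assert(safe.map(_.length).isEmpty)
  }
}
```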

Author: Sergey Zhemzhitsky <szhemzhitski@gmail.com>

Closes #17740 from szhem/SPARK-20404-null-acc-names.

(cherry picked from commit 0bc7a90210aad9025c1e1bdc99f8e723c1bf0fbf)
Signed-off-by: Sean Owen <sowen@cloudera.com>
17 months ago  [SPARK-20455][DOCS] Fix Broken Docker IT Docs
Armin Braun [Tue, 25 Apr 2017 08:13:50 +0000 (09:13 +0100)] 
[SPARK-20455][DOCS] Fix Broken Docker IT Docs

## What changes were proposed in this pull request?

Just added the Maven `test` goal.

## How was this patch tested?

No test needed, just a trivial documentation fix.

Author: Armin Braun <me@obrown.io>

Closes #17756 from original-brownbear/SPARK-20455.

(cherry picked from commit c8f1219510f469935aa9ff0b1c92cfe20372377c)
Signed-off-by: Sean Owen <sowen@cloudera.com>
17 months ago  [SPARK-20451] Filter out nested mapType datatypes from sort order in randomSplit
Sameer Agarwal [Tue, 25 Apr 2017 05:05:20 +0000 (13:05 +0800)] 
[SPARK-20451] Filter out nested mapType datatypes from sort order in randomSplit

## What changes were proposed in this pull request?

In `randomSplit`, it is possible that the underlying dataset doesn't guarantee the ordering of rows in its constituent partitions each time a split is materialized, which could result in overlapping splits.

To prevent this, as part of SPARK-12662, we explicitly sort each input partition to make the ordering deterministic. Given that `MapTypes` cannot be sorted this patch explicitly prunes them out from the sort order. Additionally, if the resulting sort order is empty, this patch then materializes the dataset to guarantee determinism.
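The pruning idea can be sketched with stand-in types (these are simplified mock-ups, not Spark's Catalyst classes): drop from the sort order any column whose type is, or recursively contains, a map.

```scala
sealed trait DataType
case object IntType extends DataType
case class MapType(key: DataType, value: DataType) extends DataType
case class ArrayType(element: DataType) extends DataType
case class StructType(fields: Seq[DataType]) extends DataType

object SortOrderPruning {
  // A column cannot participate in the sort order if its type
  // is, or recursively contains, a MapType.
  def containsMap(dt: DataType): Boolean = dt match {
    case _: MapType     => true
    case ArrayType(e)   => containsMap(e)
    case StructType(fs) => fs.exists(containsMap)
    case _              => false
  }

  // Keep only sortable columns; if the result is empty, the caller
  // must fall back to materializing the dataset for determinism.
  def pruneSortOrder(cols: Seq[(String, DataType)]): Seq[String] =
    cols.collect { case (name, dt) if !containsMap(dt) => name }
}
```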

## How was this patch tested?

Extended `randomSplit on reordered partitions` in `DataFrameStatSuite` to also test dataframes with nested mapTypes.

Author: Sameer Agarwal <sameerag@cs.berkeley.edu>

Closes #17751 from sameeragarwal/randomsplit2.

(cherry picked from commit 31345fde82ada1f8bb12807b250b04726a1f6aa6)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
17 months ago  [SPARK-20450][SQL] Unexpected first-query schema inference cost with 2.1.1
Eric Liang [Mon, 24 Apr 2017 22:33:09 +0000 (00:33 +0200)] 
[SPARK-20450][SQL] Unexpected first-query schema inference cost with 2.1.1

## What changes were proposed in this pull request?

https://issues.apache.org/jira/browse/SPARK-19611 fixes a regression from 2.0 where Spark silently fails to read case-sensitive fields when a case-sensitive schema is missing from the table properties. The fix is to detect this situation, infer the schema, and write the case-sensitive schema into the metastore.

However this can incur an unexpected performance hit the first time such a problematic table is queried (and there is a high false-positive rate here since most tables don't actually have case-sensitive fields).

This PR changes the default back to NEVER_INFER (the same behavior as 2.1.0). In 2.2, we can consider keeping the default as INFER_AND_SAVE.

## How was this patch tested?

Unit tests.

Author: Eric Liang <ekl@databricks.com>

Closes #17749 from ericl/spark-20450.

18 months ago  [SPARK-20407][TESTS][BACKPORT-2.1] ParquetQuerySuite 'Enabling/disabling ignoreCorrup...
Bogdan Raducanu [Sat, 22 Apr 2017 16:58:07 +0000 (09:58 -0700)] 
[SPARK-20407][TESTS][BACKPORT-2.1] ParquetQuerySuite 'Enabling/disabling ignoreCorruptFiles' flaky test

## What changes were proposed in this pull request?

`SharedSQLContext.afterEach` now calls `DebugFilesystem.assertNoOpenStreams` inside `eventually`.
`SQLTestUtils.withTempDir` calls `waitForTasksToFinish` before deleting the directory.

## How was this patch tested?
A new test was added but marked as ignored because it takes 30s; it can be unignored for review.

Author: Bogdan Raducanu <bogdan@databricks.com>

Closes #17720 from bogdanrdc/SPARK-20407-BACKPORT2.1.

18 months ago  Small rewording about history server use case
Hervé [Fri, 21 Apr 2017 07:52:18 +0000 (08:52 +0100)] 
Small rewording about history server use case

Hello
PR #10991 removed the built-in history view from Spark Standalone, so the history server is no longer useful only to YARN or Mesos.

Author: Hervé <dud225@users.noreply.github.com>

Closes #17709 from dud225/patch-1.

(cherry picked from commit 34767997e0c6cb28e1fac8cb650fa3511f260ca5)
Signed-off-by: Sean Owen <sowen@cloudera.com>
18 months ago  [SPARK-20409][SQL] fail early if aggregate function in GROUP BY
Wenchen Fan [Thu, 20 Apr 2017 14:59:38 +0000 (16:59 +0200)] 
[SPARK-20409][SQL] fail early if aggregate function in GROUP BY

## What changes were proposed in this pull request?

It's illegal to have an aggregate function in GROUP BY, and we should fail at the analysis phase if this happens.

## How was this patch tested?

new regression test

Author: Wenchen Fan <wenchen@databricks.com>

Closes #17704 from cloud-fan/minor.

18 months ago  [MINOR][SS] Fix a missing space in UnsupportedOperationChecker error message
Shixiong Zhu [Thu, 20 Apr 2017 01:58:14 +0000 (18:58 -0700)] 
[MINOR][SS] Fix a missing space in UnsupportedOperationChecker error message

## What changes were proposed in this pull request?

Also went through the same file to ensure other string concatenations are correct.

## How was this patch tested?

Jenkins

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #17691 from zsxwing/fix-error-message.

(cherry picked from commit 39e303a8b6db642c26dbc26ba92e87680f50e4da)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
18 months ago  [SPARK-20359][SQL] Avoid unnecessary execution in EliminateOuterJoin optimization that can lead to NPE
Koert Kuipers [Wed, 19 Apr 2017 07:52:47 +0000 (15:52 +0800)] 
[SPARK-20359][SQL] Avoid unnecessary execution in EliminateOuterJoin optimization that can lead to NPE

Avoid unnecessary execution that can lead to an NPE in EliminateOuterJoin, and add a test in DataFrameSuite to confirm the NPE is no longer thrown.

## What changes were proposed in this pull request?
Change leftHasNonNullPredicate and rightHasNonNullPredicate to lazy vals so they are only evaluated when needed.
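The effect of the change can be seen in a minimal, non-Spark sketch: a strict `val` is evaluated at construction time even when unused, whereas a `lazy val` is only evaluated on first access.

```scala
object LazyGuard {
  class Check(input: String) {
    // With a plain `val`, this would run (and throw an NPE for
    // input == null) as soon as Check is constructed; `lazy val`
    // defers evaluation until the field is first accessed.
    lazy val hasNonNullInput: Boolean = input.nonEmpty
  }

  def main(args: Array[String]): Unit = {
    // Safe: hasNonNullInput is never forced for the null input.
    val c = new Check(null)
    println("constructed safely without evaluating the predicate")
  }
}
```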

## How was this patch tested?

Added a test in DataFrameSuite that failed before this fix and now succeeds. Note that a test in the catalyst project would be better, but I am unsure how to do this.


Author: Koert Kuipers <koert@tresata.com>

Closes #17660 from koertkuipers/feat-catch-npe-in-eliminate-outer-join.

(cherry picked from commit 608bf30f0b9759fd0b9b9f33766295550996a9eb)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
18 months ago  [SPARK-17647][SQL][FOLLOWUP][MINOR] fix typo
Felix Cheung [Tue, 18 Apr 2017 06:55:40 +0000 (23:55 -0700)] 
[SPARK-17647][SQL][FOLLOWUP][MINOR] fix typo

## What changes were proposed in this pull request?

fix typo

## How was this patch tested?

manual

Author: Felix Cheung <felixcheung_m@hotmail.com>

Closes #17663 from felixcheung/likedoctypo.

(cherry picked from commit b0a1e93e93167b53058525a20a8b06f7df5f09a2)
Signed-off-by: Felix Cheung <felixcheung@apache.org>
18 months ago  [SPARK-20349][SQL][REVERT-BRANCH2.1] ListFunctions returns duplicate functions after...
Xiao Li [Tue, 18 Apr 2017 00:57:20 +0000 (17:57 -0700)] 
[SPARK-20349][SQL][REVERT-BRANCH2.1] ListFunctions returns duplicate functions after using persistent functions

Revert the changes of https://github.com/apache/spark/pull/17646 made in branch 2.1, because they break the build. They need the parser interface, but SessionCatalog in branch 2.1 does not have it.

### What changes were proposed in this pull request?

The session catalog caches some persistent functions in the `FunctionRegistry`, so there can be duplicates. Our Catalog API `listFunctions` does not handle this.

It would be better if the `SessionCatalog` API de-duplicated the records, instead of each API caller doing it. In `FunctionRegistry`, our functions are identified by unquoted strings. Thus, this PR tries to parse them using our parser interface and then de-duplicate the names.

### How was this patch tested?
Added test cases.

Author: Xiao Li <gatorsmile@gmail.com>

Closes #17661 from gatorsmile/compilationFix17646.

18 months ago  [HOTFIX] Fix compilation.
Reynold Xin [Mon, 17 Apr 2017 19:57:58 +0000 (12:57 -0700)] 
[HOTFIX] Fix compilation.

18 months ago  [SPARK-17647][SQL] Fix backslash escaping in 'LIKE' patterns.
Jakob Odersky [Mon, 17 Apr 2017 18:17:57 +0000 (11:17 -0700)] 
[SPARK-17647][SQL] Fix backslash escaping in 'LIKE' patterns.

This patch fixes a bug in the way LIKE patterns are translated to Java regexes. The bug causes any character following an escaped backslash to be escaped as well, i.e. there is double escaping.
A concrete example is the pattern `'%\\%'`. The expected Java regex for this pattern (according to the behavior described below) is `'.*\\.*'`; however, the current implementation produces `'.*\\%'` instead.
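To illustrate the intended translation (a simplified sketch, not Spark's actual implementation), a LIKE pattern using `\` as the escape character can be converted to a Java regex like so:

```scala
import java.util.regex.Pattern

object LikeToRegex {
  def translate(pattern: String): String = {
    val sb = new StringBuilder
    val it = pattern.iterator
    while (it.hasNext) {
      it.next() match {
        // An escaped character matches itself literally -- including
        // an escaped backslash -- with no double escaping.
        case '\\' if it.hasNext => sb.append(Pattern.quote(it.next().toString))
        case '%'                => sb.append(".*") // % matches any sequence
        case '_'                => sb.append(".")  // _ matches any single char
        case c                  => sb.append(Pattern.quote(c.toString))
      }
    }
    sb.toString
  }
}
```

With this sketch, the pattern `%\\%` (percent, escaped backslash, percent) matches any string containing a literal backslash, which is the expected behavior described above.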

---

Update: in light of the discussion that ensued, we should explicitly define the expected behavior of LIKE expressions, especially in certain edge cases. With the help of gatorsmile, we put together a list of different RDBMSs and their variations with respect to certain standard features.

| RDBMS\Features | Wildcards | Default escape [1] | Case sensitivity |
| --- | --- | --- | --- |
| [MS SQL Server](https://msdn.microsoft.com/en-us/library/ms179859.aspx) | _, %, [], [^] | none | no |
| [Oracle](https://docs.oracle.com/cd/B12037_01/server.101/b10759/conditions016.htm) | _, % | none | yes |
| [DB2 z/OS](http://www.ibm.com/support/knowledgecenter/SSEPEK_11.0.0/sqlref/src/tpc/db2z_likepredicate.html) | _, % | none | yes |
| [MySQL](http://dev.mysql.com/doc/refman/5.7/en/string-comparison-functions.html) | _, % | none | no |
| [PostgreSQL](https://www.postgresql.org/docs/9.0/static/functions-matching.html) | _, % | \ | yes |
| [Hive](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF) | _, % | none | yes |
| Current Spark | _, % | \ | yes |

[1] Default escape character: most systems do not have a default escape character; instead, the user can specify one by calling a LIKE expression with an escape argument: [A] LIKE [B] ESCAPE [C]. This syntax is currently not supported by Spark; however, I would volunteer to implement this feature in a separate ticket.

The specifications are often quite terse and certain scenarios are undocumented, so here is a list of scenarios that I am uncertain about and would appreciate any input on. Specifically, I am looking for feedback on whether or not Spark's current behavior should be changed.
1. [x] Ending a pattern with the escape sequence, e.g. `like 'a\'`.
   PostgreSQL gives an error: 'LIKE pattern must not end with escape character', which I personally find logical. Currently, Spark allows "non-terminated" escapes and simply ignores them as part of the pattern.
   According to [DB2's documentation](http://www.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.messages.sql.doc/doc/msql00130n.html), ending a pattern in an escape character is invalid.
   _Proposed new behaviour in Spark: throw AnalysisException_
2. [x] Empty input, e.g. `'' like ''`
   Postgres and DB2 will match empty input only if the pattern is empty as well, any other combination of empty input will not match. Spark currently follows this rule.
3. [x] Escape before a non-special character, e.g. `'a' like '\a'`.
   Escaping a non-wildcard character is not really documented, but PostgreSQL just treats it verbatim, which I also find the least surprising behavior. Spark does the same.
   According to [DB2's documentation](http://www.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.messages.sql.doc/doc/msql00130n.html), it is invalid to follow an escape character with anything other than an escape character, an underscore or a percent sign.
   _Proposed new behaviour in Spark: throw AnalysisException_

The current specification is also described in the operator's source code in this patch.

An extra case was added to the regex unit tests.

Author: Jakob Odersky <jakob@odersky.com>

This patch had conflicts when merged, resolved by
Committer: Reynold Xin <rxin@databricks.com>

Closes #15398 from jodersky/SPARK-17647.

(cherry picked from commit e5fee3e4f853f906f0b476bb04ee35a15f1ae650)
Signed-off-by: Reynold Xin <rxin@databricks.com>
18 months ago  [SPARK-20349][SQL] ListFunctions returns duplicate functions after using persistent...
Xiao Li [Mon, 17 Apr 2017 16:50:20 +0000 (09:50 -0700)] 
[SPARK-20349][SQL] ListFunctions returns duplicate functions after using persistent functions

### What changes were proposed in this pull request?
The session catalog caches some persistent functions in the `FunctionRegistry`, so there can be duplicates. Our Catalog API `listFunctions` does not handle this.

It would be better if the `SessionCatalog` API de-duplicated the records, instead of each API caller doing it. In `FunctionRegistry`, our functions are identified by unquoted strings. Thus, this PR tries to parse them using our parser interface and then de-duplicate the names.

### How was this patch tested?
Added test cases.

Author: Xiao Li <gatorsmile@gmail.com>

Closes #17646 from gatorsmile/showFunctions.

(cherry picked from commit 01ff0350a85b179715946c3bd4f003db7c5e3641)
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
18 months ago  [SPARK-20335][SQL][BACKPORT-2.1] Children expressions of Hive UDF impacts the determi...
Xiao Li [Mon, 17 Apr 2017 07:59:55 +0000 (15:59 +0800)] 
[SPARK-20335][SQL][BACKPORT-2.1] Children expressions of Hive UDF impacts the determinism of Hive UDF

### What changes were proposed in this pull request?

This PR is to backport https://github.com/apache/spark/pull/17635 to Spark 2.1

---
```JAVA
  /**
   * Certain optimizations should not be applied if UDF is not deterministic.
   * Deterministic UDF returns same result each time it is invoked with a
   * particular input. This determinism just needs to hold within the context of
   * a query.
   *
   * return true if the UDF is deterministic
   */
  boolean deterministic() default true;
```

Based on the definition of [UDFType](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFType.java#L42-L50), when Hive UDF's children are non-deterministic, Hive UDF is also non-deterministic.
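The rule can be sketched abstractly (`Expr` is a made-up stand-in here, not Catalyst's `Expression`): an expression is deterministic only if the UDF itself is deterministic and all of its children are.

```scala
case class Expr(selfDeterministic: Boolean, children: Seq[Expr] = Nil) {
  // Non-determinism propagates up: one non-deterministic child
  // makes the whole expression non-deterministic.
  def deterministic: Boolean =
    selfDeterministic && children.forall(_.deterministic)
}
```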

### How was this patch tested?
Added test cases.

Author: Xiao Li <gatorsmile@gmail.com>

Closes #17652 from gatorsmile/backport-17635.

18 months ago  Preparing development version 2.1.2-SNAPSHOT
Patrick Wendell [Fri, 14 Apr 2017 22:37:47 +0000 (15:37 -0700)] 
Preparing development version 2.1.2-SNAPSHOT

18 months ago  Preparing Spark release v2.1.1-rc3
Patrick Wendell [Fri, 14 Apr 2017 22:37:43 +0000 (15:37 -0700)] 
Preparing Spark release v2.1.1-rc3

18 months ago  [SPARK-20243][TESTS] DebugFilesystem.assertNoOpenStreams thread race
Bogdan Raducanu [Mon, 10 Apr 2017 15:34:15 +0000 (17:34 +0200)] 
[SPARK-20243][TESTS] DebugFilesystem.assertNoOpenStreams thread race

## What changes were proposed in this pull request?

Synchronize access to openStreams map.
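A hedged sketch of the fix (simplified names, not the actual DebugFilesystem code): guard the shared map with a lock so that a checker thread never races concurrent open/close bookkeeping.

```scala
import scala.collection.mutable

class OpenStreamTracker {
  // Shared mutable state; every access goes through its monitor.
  private val openStreams = mutable.Map.empty[String, Throwable]

  def opened(path: String): Unit = openStreams.synchronized {
    openStreams(path) = new Throwable(s"stream opened: $path")
  }

  def closed(path: String): Unit = openStreams.synchronized {
    openStreams.remove(path)
  }

  def assertNoOpenStreams(): Unit = openStreams.synchronized {
    assert(openStreams.isEmpty, s"${openStreams.size} stream(s) still open")
  }
}
```

Without the `synchronized` blocks, the checker could iterate the map while another thread mutates it, causing spurious failures or exceptions.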

## How was this patch tested?

Existing tests.

Author: Bogdan Raducanu <bogdan@databricks.com>

Closes #17592 from bogdanrdc/SPARK-20243.

18 months ago  [SPARK-19946][TESTS][BACKPORT-2.1] DebugFilesystem.assertNoOpenStreams should report...
Bogdan Raducanu [Thu, 13 Apr 2017 18:21:58 +0000 (20:21 +0200)] 
[SPARK-19946][TESTS][BACKPORT-2.1] DebugFilesystem.assertNoOpenStreams should report the open streams to help debugging

## What changes were proposed in this pull request?
Backport for PR #17292
DebugFilesystem.assertNoOpenStreams throws an exception with a cause exception that actually shows the code line which leaked the stream.

## How was this patch tested?
New test in SparkContextSuite to check there is a cause exception.

Author: Bogdan Raducanu <bogdan@databricks.com>

Closes #17632 from bogdanrdc/SPARK-19946-BRANCH2.1.

18 months ago  [SPARK-19924][SQL][BACKPORT-2.1] Handle InvocationTargetException for all Hive Shim
Xiao Li [Thu, 13 Apr 2017 11:18:55 +0000 (19:18 +0800)] 
[SPARK-19924][SQL][BACKPORT-2.1] Handle InvocationTargetException for all Hive Shim

### What changes were proposed in this pull request?

This is to backport the PR https://github.com/apache/spark/pull/17265 to Spark 2.1 branch.

---
Since we are using shims for most Hive metastore APIs, the exceptions thrown by the underlying methods of Method.invoke() are wrapped in `InvocationTargetException`. Instead of handling them one by one, we should handle all of them in `withClient`. If any of them is missed, the error message can look unfriendly. For example, below is an example for dropping tables.

```
Expected exception org.apache.spark.sql.AnalysisException to be thrown, but java.lang.reflect.InvocationTargetException was thrown.
ScalaTestFailureLocation: org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14 at (ExternalCatalogSuite.scala:193)
org.scalatest.exceptions.TestFailedException: Expected exception org.apache.spark.sql.AnalysisException to be thrown, but java.lang.reflect.InvocationTargetException was thrown.
at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:496)
at org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1555)
at org.scalatest.Assertions$class.intercept(Assertions.scala:1004)
at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14.apply$mcV$sp(ExternalCatalogSuite.scala:193)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14.apply(ExternalCatalogSuite.scala:183)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14.apply(ExternalCatalogSuite.scala:183)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:68)
at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(ExternalCatalogSuite.scala:40)
at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:255)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite.runTest(ExternalCatalogSuite.scala:40)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
at org.scalatest.Suite$class.run(Suite.scala:1424)
at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:31)
at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:31)
at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
at scala.collection.immutable.List.foreach(List.scala:381)
at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
at org.scalatest.tools.Runner$.run(Runner.scala:883)
at org.scalatest.tools.Runner.run(Runner.scala)
at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2(ScalaTestRunner.java:138)
at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.sql.hive.client.Shim_v0_14.dropTable(HiveShim.scala:736)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$dropTable$1.apply$mcV$sp(HiveClientImpl.scala:451)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$dropTable$1.apply(HiveClientImpl.scala:451)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$dropTable$1.apply(HiveClientImpl.scala:451)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:287)
at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:228)
at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:227)
at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:270)
at org.apache.spark.sql.hive.client.HiveClientImpl.dropTable(HiveClientImpl.scala:450)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$dropTable$1.apply$mcV$sp(HiveExternalCatalog.scala:456)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$dropTable$1.apply(HiveExternalCatalog.scala:454)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$dropTable$1.apply(HiveExternalCatalog.scala:454)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:94)
at org.apache.spark.sql.hive.HiveExternalCatalog.dropTable(HiveExternalCatalog.scala:454)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14$$anonfun$apply$mcV$sp$8.apply$mcV$sp(ExternalCatalogSuite.scala:194)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14$$anonfun$apply$mcV$sp$8.apply(ExternalCatalogSuite.scala:194)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14$$anonfun$apply$mcV$sp$8.apply(ExternalCatalogSuite.scala:194)
at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
... 57 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: NoSuchObjectException(message:db2.unknown_table table not found)
at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1038)
... 79 more
Caused by: NoSuchObjectException(message:db2.unknown_table table not found)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:1808)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1778)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy10.get_table(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1208)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.getTable(SessionHiveMetaStoreClient.java:131)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:952)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:904)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
at com.sun.proxy.$Proxy11.dropTable(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1035)
... 79 more
```

After unwrapping the exception, the message looks like
```
org.apache.hadoop.hive.ql.metadata.HiveException: NoSuchObjectException(message:db2.unknown_table table not found);
org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: NoSuchObjectException(message:db2.unknown_table table not found);
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:100)
at org.apache.spark.sql.hive.HiveExternalCatalog.dropTable(HiveExternalCatalog.scala:460)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14.apply$mcV$sp(ExternalCatalogSuite.scala:193)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14.apply(ExternalCatalogSuite.scala:183)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogSuite$$anonfun$14.apply(ExternalCatalogSuite.scala:183)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
...
```
### How was this patch tested?
N/A

Author: Xiao Li <gatorsmile@gmail.com>

Closes #17627 from gatorsmile/backport-17265.

18 months ago  [SPARK-20131][CORE] Don't use `this` lock in StandaloneSchedulerBackend.stop
Shixiong Zhu [Thu, 13 Apr 2017 00:44:18 +0000 (17:44 -0700)] 
[SPARK-20131][CORE] Don't use `this` lock in StandaloneSchedulerBackend.stop

## What changes were proposed in this pull request?

`o.a.s.streaming.StreamingContextSuite.SPARK-18560 Receiver data should be deserialized properly` is flaky because there is a potential deadlock in StandaloneSchedulerBackend which causes an `await` timeout. Here is the related stack trace:
```
"Thread-31" #211 daemon prio=5 os_prio=31 tid=0x00007fedd4808000 nid=0x16403 waiting on condition [0x00007000239b7000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x000000079b49ca10> (a scala.concurrent.impl.Promise$CompletionLatch)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:92)
at org.apache.spark.rpc.RpcEndpointRef.askSync(RpcEndpointRef.scala:76)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stop(CoarseGrainedSchedulerBackend.scala:402)
at org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend.org$apache$spark$scheduler$cluster$StandaloneSchedulerBackend$$stop(StandaloneSchedulerBackend.scala:213)
- locked <0x00000007066fca38> (a org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend)
at org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend.stop(StandaloneSchedulerBackend.scala:116)
- locked <0x00000007066fca38> (a org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend)
at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:517)
at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1657)
at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1921)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1302)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1920)
at org.apache.spark.streaming.StreamingContext.stop(StreamingContext.scala:708)
at org.apache.spark.streaming.StreamingContextSuite$$anonfun$43$$anonfun$apply$mcV$sp$66$$anon$3.run(StreamingContextSuite.scala:827)

"dispatcher-event-loop-3" #18 daemon prio=5 os_prio=31 tid=0x00007fedd603a000 nid=0x6203 waiting for monitor entry [0x0000700003be4000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.org$apache$spark$scheduler$cluster$CoarseGrainedSchedulerBackend$DriverEndpoint$$makeOffers(CoarseGrainedSchedulerBackend.scala:253)
- waiting to lock <0x00000007066fca38> (a org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend)
at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anonfun$receive$1.applyOrElse(CoarseGrainedSchedulerBackend.scala:124)
at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:117)
at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```

This PR removes `synchronized` and changes `stopping` to an `AtomicBoolean`, making `stop` idempotent and fixing the deadlock.
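The shape of the fix can be sketched outside Spark (names are illustrative, not the actual `StandaloneSchedulerBackend` code; Python has no `AtomicBoolean`, so a lock-guarded flag plays the compare-and-set role): only the first caller of `stop` runs the teardown, and no monitor is held while it blocks, so event-loop threads can keep making progress.

```python
import threading

class Backend:
    """Sketch of an idempotent stop() guarded by an atomic flag (illustrative)."""

    def __init__(self):
        self._flag_lock = threading.Lock()
        self._stopping = False          # plays the role of the AtomicBoolean
        self.stop_count = 0

    def stop(self):
        # compareAndSet(false, true): only the first caller proceeds
        with self._flag_lock:
            if self._stopping:
                return
            self._stopping = True
        # teardown runs outside any monitor, so other threads that would
        # previously have blocked on the backend's lock are not stuck
        self.stop_count += 1

b = Backend()
threads = [threading.Thread(target=b.stop) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Concurrent calls all return, but the teardown body runs exactly once.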

## How was this patch tested?

Jenkins

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #17610 from zsxwing/SPARK-20131.

(cherry picked from commit c5f1cc370f0aa1f0151fd34251607a8de861395e)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
18 months ago[SPARK-20304][SQL] AssertNotNull should not include path in string representation
Reynold Xin [Wed, 12 Apr 2017 16:05:05 +0000 (09:05 -0700)] 
[SPARK-20304][SQL] AssertNotNull should not include path in string representation

## What changes were proposed in this pull request?
AssertNotNull's toString/simpleString dumps the entire walkedTypePath. walkedTypePath is used for error message reporting and shouldn't be part of the output.

## How was this patch tested?
Manually tested.

Author: Reynold Xin <rxin@databricks.com>

Closes #17616 from rxin/SPARK-20304.

(cherry picked from commit 540855382c8f139fbf4eb0800b31c7ce91f29c7f)
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
18 months ago[SPARK-20296][TRIVIAL][DOCS] Count distinct error message for streaming
jtoka [Wed, 12 Apr 2017 10:36:08 +0000 (11:36 +0100)] 
[SPARK-20296][TRIVIAL][DOCS] Count distinct error message for streaming

## What changes were proposed in this pull request?
Update count distinct error message for streaming datasets/dataframes to match current behavior. These aggregations are not yet supported, regardless of whether the dataset/dataframe is aggregated.

Author: jtoka <jason.tokayer@gmail.com>

Closes #17609 from jtoka/master.

(cherry picked from commit 2e1fd46e12bf948490ece2caa73d227b6a924a14)
Signed-off-by: Sean Owen <sowen@cloudera.com>
18 months ago[MINOR][DOCS] Fix spacings in Structured Streaming Programming Guide
Lee Dongjin [Wed, 12 Apr 2017 08:12:14 +0000 (09:12 +0100)] 
[MINOR][DOCS] Fix spacings in Structured Streaming Programming Guide

## What changes were proposed in this pull request?

1. Omitted space between the sentences: `... on static data.The Spark SQL engine will ...` -> `... on static data. The Spark SQL engine will ...`
2. Omitted colon in Output Model section.

## How was this patch tested?

None.

Author: Lee Dongjin <dongjin@apache.org>

Closes #17564 from dongjinleekr/feature/fix-programming-guide.

(cherry picked from commit b9384382484a9f5c6b389742e7fdf63865de81c0)
Signed-off-by: Sean Owen <sowen@cloudera.com>
18 months ago[SPARK-20291][SQL] NaNvl(FloatType, NullType) should not be cast to NaNvl(DoubleType...
DB Tsai [Wed, 12 Apr 2017 03:19:20 +0000 (11:19 +0800)] 
[SPARK-20291][SQL] NaNvl(FloatType, NullType) should not be cast to NaNvl(DoubleType, DoubleType)

## What changes were proposed in this pull request?

`NaNvl(float value, null)` will be converted into `NaNvl(float value, Cast(null, DoubleType))` and finally `NaNvl(Cast(float value, DoubleType), Cast(null, DoubleType))`.

This will cause mismatching in the output type when the input type is float.

Adding an extra rule to `TypeCoercion` resolves this issue.

## How was this patch tested?

Unit tests.

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: DB Tsai <dbt@netflix.com>

Closes #17606 from dbtsai/fixNaNvl.

(cherry picked from commit 8ad63ee158815de5ffff7bf03cdf25aef312095f)
Signed-off-by: DB Tsai <dbtsai@dbtsai.com>
18 months ago[SPARK-18555][MINOR][SQL] Fix the @since tag when backporting from 2.2 branch into...
DB Tsai [Tue, 11 Apr 2017 04:05:40 +0000 (04:05 +0000)] 
[SPARK-18555][MINOR][SQL] Fix the @since tag when backporting from 2.2 branch into 2.1 branch

## What changes were proposed in this pull request?

Fix the since tag when backporting critical bugs (SPARK-18555) from 2.2 branch into 2.1 branch.

## How was this patch tested?

N/A

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: DB Tsai <dbtsai@dbtsai.com>

Closes #17600 from dbtsai/branch-2.1.

18 months ago[SPARK-17564][TESTS] Fix flaky RequestTimeoutIntegrationSuite.furtherRequestsDelay
Shixiong Zhu [Tue, 11 Apr 2017 03:41:08 +0000 (20:41 -0700)] 
[SPARK-17564][TESTS] Fix flaky RequestTimeoutIntegrationSuite.furtherRequestsDelay

## What changes were proposed in this pull request?

This PR fixes the following failure:
```
sbt.ForkMain$ForkError: java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at org.apache.spark.network.RequestTimeoutIntegrationSuite.furtherRequestsDelay(RequestTimeoutIntegrationSuite.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
at com.novocode.junit.JUnitRunner$1.execute(JUnitRunner.java:132)
at sbt.ForkMain$Run$2.call(ForkMain.java:296)
at sbt.ForkMain$Run$2.call(ForkMain.java:286)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
```

It happens several times per month on [Jenkins](http://spark-tests.appspot.com/test-details?suite_name=org.apache.spark.network.RequestTimeoutIntegrationSuite&test_name=furtherRequestsDelay). The failure is because `callback1` may not be called before `assertTrue(callback1.failure instanceof IOException);`. It's pretty easy to reproduce this error by adding a sleep before this line: https://github.com/apache/spark/blob/379b0b0bbdbba2278ce3bcf471bd75f6ffd9cf0d/common/network-common/src/test/java/org/apache/spark/network/RequestTimeoutIntegrationSuite.java#L267

The fix is straightforward: just use the latch to wait until `callback1` has been called.
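The same pattern can be sketched in Python (hypothetical names; the real test uses a Java `CountDownLatch`): block on a latch until the callback has fired, then assert on its recorded failure instead of racing it.

```python
import threading

class Callback:
    def __init__(self):
        self.failure = None
        self.latch = threading.Event()   # stands in for CountDownLatch(1)

    def on_failure(self, exc):
        self.failure = exc
        self.latch.set()                 # countDown()

cb = Callback()
# simulate the transport layer failing the request a little later
threading.Timer(0.01, cb.on_failure, args=(IOError("request timed out"),)).start()

# wait for the callback before asserting, which removes the flakiness
fired = cb.latch.wait(timeout=5)
```

Without the `wait`, an assertion on `cb.failure` can run before the callback does, which is exactly the race behind the flaky test.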

## How was this patch tested?

Jenkins

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #17599 from zsxwing/SPARK-17564.

(cherry picked from commit 734dfbfcfea1ed1ab3a5f18f84c412a569dd87e7)
Signed-off-by: Reynold Xin <rxin@databricks.com>
18 months ago[SPARK-20270][SQL] na.fill should not change the values in long or integer when the...
DB Tsai [Mon, 10 Apr 2017 05:16:34 +0000 (05:16 +0000)] 
[SPARK-20270][SQL] na.fill should not change the values in long or integer when the default value is in double

## What changes were proposed in this pull request?

This bug was partially addressed in SPARK-18555 (https://github.com/apache/spark/pull/15994), but the root cause wasn't completely solved. The bug is critical since, in our application, it silently changes a member id stored as a Long whenever the id is too large to be represented losslessly as a Double.

Here is an example of how this happens. With
```
      Seq[(java.lang.Long, java.lang.Double)]((null, 3.14), (9123146099426677101L, null),
        (9123146560113991650L, 1.6), (null, null)).toDF("a", "b").na.fill(0.2),
```
the logical plan will be
```
== Analyzed Logical Plan ==
a: bigint, b: double
Project [cast(coalesce(cast(a#232L as double), cast(0.2 as double)) as bigint) AS a#240L, cast(coalesce(nanvl(b#233, cast(null as double)), 0.2) as double) AS b#241]
+- Project [_1#229L AS a#232L, _2#230 AS b#233]
   +- LocalRelation [_1#229L, _2#230]
```

Note that even when the value is not null, Spark first casts the Long to Double and, if the result is not null, casts it back to Long, which loses precision.

Non-null values should be left unchanged, but Spark alters them, which is wrong.
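The precision loss itself is easy to demonstrate in plain Python, whose `float` is the same IEEE-754 double: for values in [2^62, 2^63) the spacing between representable doubles is 1024, so the round trip lands on a neighbouring multiple of 1024.

```python
v = 9123146099426677101          # one of the ids from the example above

round_tripped = int(float(v))    # Long -> Double -> Long, as the old plan did
# the nearest representable double is 365 below v, a multiple of 1024
changed = round_tripped != v
```

This is exactly the corruption the new plan avoids by casting the fill value to bigint instead of casting the column to double.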

With the PR, the logical plan will be
```
== Analyzed Logical Plan ==
a: bigint, b: double
Project [coalesce(a#232L, cast(0.2 as bigint)) AS a#240L, coalesce(nanvl(b#233, cast(null as double)), cast(0.2 as double)) AS b#241]
+- Project [_1#229L AS a#232L, _2#230 AS b#233]
   +- LocalRelation [_1#229L, _2#230]
```
which behaves correctly without changing the original Long values and also avoids extra cost of unnecessary casting.

## How was this patch tested?

unit test added.

+cc srowen rxin cloud-fan gatorsmile

Thanks.

Author: DB Tsai <dbt@netflix.com>

Closes #17577 from dbtsai/fixnafill.

(cherry picked from commit 1a0bc41659eef317dcac18df35c26857216a4314)
Signed-off-by: DB Tsai <dbtsai@dbtsai.com>
18 months ago[SPARK-18555][SQL] DataFrameNaFunctions.fill miss up original values in long integers
root [Tue, 6 Dec 2016 02:39:56 +0000 (18:39 -0800)] 
[SPARK-18555][SQL] DataFrameNaFunctions.fill miss up original values in long integers

## What changes were proposed in this pull request?

   When `DataSet.na.fill(0)` is used on a DataSet that has a long column, it changes the original long values.

   The reason is that the type of `fill`'s parameter is Double, so numeric columns are always cast to double (`fillCol[Double](f, value)`).
```
  def fill(value: Double, cols: Seq[String]): DataFrame = {
    val columnEquals = df.sparkSession.sessionState.analyzer.resolver
    val projections = df.schema.fields.map { f =>
      // Only fill if the column is part of the cols list.
      if (f.dataType.isInstanceOf[NumericType] && cols.exists(col => columnEquals(f.name, col))) {
        fillCol[Double](f, value)
      } else {
        df.col(f.name)
      }
    }
    df.select(projections : _*)
  }
```

 For example:
```
scala> val df = Seq[(Long, Long)]((1, 2), (-1, -2), (9123146099426677101L, 9123146560113991650L)).toDF("a", "b")
df: org.apache.spark.sql.DataFrame = [a: bigint, b: bigint]

scala> df.show
+-------------------+-------------------+
|                  a|                  b|
+-------------------+-------------------+
|                  1|                  2|
|                 -1|                 -2|
|9123146099426677101|9123146560113991650|
+-------------------+-------------------+

scala> df.na.fill(0).show
+-------------------+-------------------+
|                  a|                  b|
+-------------------+-------------------+
|                  1|                  2|
|                 -1|                 -2|
|9123146099426676736|9123146560113991680|
+-------------------+-------------------+
 ```

the original values changed (which is not the expected result):
```
 9123146099426677101 -> 9123146099426676736
 9123146560113991650 -> 9123146560113991680
```

## How was this patch tested?

unit test added.

Author: root <root@iZbp1gsnrlfzjxh82cz80vZ.(none)>

Closes #15994 from windpiger/nafillMissupOriginalValue.

(cherry picked from commit 508de38c9928d160cf70e8e7d69ddb1dca5c1a64)
Signed-off-by: DB Tsai <dbtsai@dbtsai.com>
18 months ago[SPARK-20285][TESTS] Increase the pyspark streaming test timeout to 30 seconds
Shixiong Zhu [Mon, 10 Apr 2017 21:06:49 +0000 (14:06 -0700)] 
[SPARK-20285][TESTS] Increase the pyspark streaming test timeout to 30 seconds

## What changes were proposed in this pull request?

Saw the following failure locally:

```
Traceback (most recent call last):
  File "/home/jenkins/workspace/python/pyspark/streaming/tests.py", line 351, in test_cogroup
    self._test_func(input, func, expected, sort=True, input2=input2)
  File "/home/jenkins/workspace/python/pyspark/streaming/tests.py", line 162, in _test_func
    self.assertEqual(expected, result)
AssertionError: Lists differ: [[(1, ([1], [2])), (2, ([1], [... != []

First list contains 3 additional elements.
First extra element 0:
[(1, ([1], [2])), (2, ([1], [])), (3, ([1], []))]

+ []
- [[(1, ([1], [2])), (2, ([1], [])), (3, ([1], []))],
-  [(1, ([1, 1, 1], [])), (2, ([1], [])), (4, ([], [1]))],
-  [('', ([1, 1], [1, 2])), ('a', ([1, 1], [1, 1])), ('b', ([1], [1]))]]
```

It also happened on Jenkins: http://spark-tests.appspot.com/builds/spark-branch-2.1-test-sbt-hadoop-2.7/120

It's because when the machine is overloaded, the timeout is not enough. This PR just increases the timeout to 30 seconds.

## How was this patch tested?

Jenkins

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #17597 from zsxwing/SPARK-20285.

(cherry picked from commit f9a50ba2d1bfa3f55199df031e71154611ba51f6)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
18 months ago[SPARK-20280][CORE] FileStatusCache Weigher integer overflow
Bogdan Raducanu [Mon, 10 Apr 2017 19:56:21 +0000 (21:56 +0200)] 
[SPARK-20280][CORE] FileStatusCache Weigher integer overflow

## What changes were proposed in this pull request?

Weigher.weigh needs to return Int, but it is possible for an Array[FileStatus] to have a total size > Int.MaxValue. To avoid this, the size is scaled down by a factor of 32. The maximumWeight of the cache is scaled down by the same factor.
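A minimal sketch of the scaling idea (constants and function names are illustrative, not the exact Spark code): both the per-entry weight and the cache's maximum weight are divided by the same factor, so relative eviction behavior is preserved while weights stay within `Int` range.

```python
WEIGHT_SCALE = 32                      # the scale factor described above
JAVA_INT_MAX = 2**31 - 1

def weigh(entry_size_in_bytes: int) -> int:
    # scaled weight of one cache entry
    return entry_size_in_bytes // WEIGHT_SCALE

def max_weight(configured_max_bytes: int) -> int:
    # the cache maximum is scaled identically, keeping eviction thresholds
    # equivalent to the unscaled configuration
    return configured_max_bytes // WEIGHT_SCALE

# an entry slightly over Int.MaxValue bytes now gets a representable weight
big_entry_weight = weigh(JAVA_INT_MAX + 1)
```

Entries would overflow again only at 32x the old limit, which the commit accepts as a reasonable trade-off.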

## How was this patch tested?
New test in FileIndexSuite

Author: Bogdan Raducanu <bogdan@databricks.com>

Closes #17591 from bogdanrdc/SPARK-20280.

(cherry picked from commit f6dd8e0e1673aa491b895c1f0467655fa4e9d52f)
Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
18 months ago[SPARK-20264][SQL] asm should be non-test dependency in sql/core
Reynold Xin [Mon, 10 Apr 2017 03:32:07 +0000 (20:32 -0700)] 
[SPARK-20264][SQL] asm should be non-test dependency in sql/core

## What changes were proposed in this pull request?
The sql/core module currently declares asm as a test-scope dependency. Transitively it should actually be a normal dependency, since the core module defines it. This occasionally confuses IntelliJ.

## How was this patch tested?
N/A - This is a build change.

Author: Reynold Xin <rxin@databricks.com>

Closes #17574 from rxin/SPARK-20264.

(cherry picked from commit 7bfa05e0a5e6860a942e1ce47e7890d665acdfe3)
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
18 months ago[SPARK-20260][MLLIB] String interpolation required for error message
Vijay Ramesh [Sun, 9 Apr 2017 18:39:09 +0000 (19:39 +0100)] 
[SPARK-20260][MLLIB] String interpolation required for error message

## What changes were proposed in this pull request?
This error message doesn't get properly formatted because of a missing `s`.  Currently the error looks like:

```
Caused by: java.lang.IllegalArgumentException: requirement failed: indices should be one-based and in ascending order; found current=$current, previous=$previous; line="$line"
```
(note the literal `$current` instead of the interpolated value)
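The Python analogue of the bug is a missing `f` prefix on what was meant to be an f-string (the snippet below is an illustration, not the MLlib code):

```python
current, previous = 3, 1

# Scala string without the `s` prefix: placeholders are emitted literally
broken = "found current=$current, previous=$previous"
# the interpolated version, analogous to prefixing the Scala string with `s`
fixed = f"found current={current}, previous={previous}"
```

The broken message still contains `$current` verbatim, which is exactly what users saw in the exception text.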

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: Vijay Ramesh <vramesh@demandbase.com>

Closes #17572 from vijaykramesh/master.

(cherry picked from commit 261eaf5149a8fe479ab4f9c34db892bcedbf5739)
Signed-off-by: Sean Owen <sowen@cloudera.com>
18 months ago[SPARK-20262][SQL] AssertNotNull should throw NullPointerException
Reynold Xin [Sat, 8 Apr 2017 04:14:50 +0000 (21:14 -0700)] 
[SPARK-20262][SQL] AssertNotNull should throw NullPointerException

AssertNotNull currently throws RuntimeException. It should throw NullPointerException, which is more specific.

N/A

Author: Reynold Xin <rxin@databricks.com>

Closes #17573 from rxin/SPARK-20262.

(cherry picked from commit e1afc4dcca8ba517f48200c0ecde1152505e41ec)
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
18 months ago[SPARK-20246][SQL] should not push predicate down through aggregate with non-determin...
Wenchen Fan [Sat, 8 Apr 2017 03:54:18 +0000 (20:54 -0700)] 
[SPARK-20246][SQL] should not push predicate down through aggregate with non-deterministic expressions

## What changes were proposed in this pull request?

Similar to `Project`, when `Aggregate` has non-deterministic expressions, we should not push predicate down through it, as it will change the number of input rows and thus change the evaluation result of non-deterministic expressions in `Aggregate`.
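A small Python model (illustrative only, not Catalyst code) shows why the rewrite is unsafe: a non-deterministic expression consumes one random draw per input row, so changing how many rows reach it changes its output even with a fixed seed.

```python
import random

data = [1, 2, 3]

def project_then_filter(rows):
    # rand() is evaluated over ALL rows, then the predicate filters (original plan)
    rng = random.Random(42)
    projected = [(x, rng.random()) for x in rows]
    return [t for t in projected if t[0] >= 2]

def filter_then_project(rows):
    # predicate pushed below: rand() now sees fewer input rows
    rng = random.Random(42)
    return [(x, rng.random()) for x in rows if x >= 2]

before = project_then_filter(data)
after = filter_then_project(data)
```

The surviving rows are the same, but their non-deterministic values differ, so the two plans are not equivalent.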

## How was this patch tested?

new regression test

Author: Wenchen Fan <wenchen@databricks.com>

Closes #17562 from cloud-fan/filter.

(cherry picked from commit 7577e9c356b580d744e1fc27c645fce41bdf9cf0)
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
18 months ago[SPARK-20218][DOC][APP-ID] applications//stages' in REST API,add description.
郭小龙 10207633 [Fri, 7 Apr 2017 12:03:07 +0000 (13:03 +0100)] 
[SPARK-20218][DOC][APP-ID] applications//stages' in REST API,add description.

## What changes were proposed in this pull request?

1. `/applications/[app-id]/stages` in the REST API should document the parameter `?status=[active|complete|pending|failed]`, which lists only stages in the given state.

Without this description, users of the API do not know that the stage list can be filtered by status.

2. `/applications/[app-id]/stages/[stage-id]` in the REST API should remove the redundant description '?status=[active|complete|pending|failed] list only stages in the state.', because a single stage is already determined by its stage-id.

code:
```scala
  @GET
  def stageList(@QueryParam("status") statuses: JList[StageStatus]): Seq[StageData] = {
    val listener = ui.jobProgressListener
    val stageAndStatus = AllStagesResource.stagesAndStatus(ui)
    val adjStatuses = {
      if (statuses.isEmpty()) {
        Arrays.asList(StageStatus.values(): _*)
      } else {
        statuses
      }
    }
    // ...
  }
```

## How was this patch tested?

manual tests

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: 郭小龙 10207633 <guo.xiaolong1@zte.com.cn>

Closes #17534 from guoxiaolongzte/SPARK-20218.

(cherry picked from commit 9e0893b53d68f777c1f3fb0a67820424a9c253ab)
Signed-off-by: Sean Owen <sowen@cloudera.com>
18 months ago[SPARK-20214][ML] Make sure converted csc matrix has sorted indices
Liang-Chi Hsieh [Thu, 6 Apr 2017 00:46:44 +0000 (17:46 -0700)] 
[SPARK-20214][ML] Make sure converted csc matrix has sorted indices

## What changes were proposed in this pull request?

`_convert_to_vector` converts a scipy sparse matrix to a csc matrix for initializing `SparseVector`. However, it doesn't guarantee that the converted csc matrix has sorted indices, so a failure happens when you do something like this:

    from scipy.sparse import lil_matrix
    lil = lil_matrix((4, 1))
    lil[1, 0] = 1
    lil[3, 0] = 2
    _convert_to_vector(lil.todok())

    File "/home/jenkins/workspace/python/pyspark/mllib/linalg/__init__.py", line 78, in _convert_to_vector
      return SparseVector(l.shape[0], csc.indices, csc.data)
    File "/home/jenkins/workspace/python/pyspark/mllib/linalg/__init__.py", line 556, in __init__
      % (self.indices[i], self.indices[i + 1]))
    TypeError: Indices 3 and 1 are not strictly increasing

A simple test can confirm that `dok_matrix.tocsc()` won't guarantee sorted indices:

    >>> from scipy.sparse import lil_matrix
    >>> lil = lil_matrix((4, 1))
    >>> lil[1, 0] = 1
    >>> lil[3, 0] = 2
    >>> dok = lil.todok()
    >>> csc = dok.tocsc()
    >>> csc.has_sorted_indices
    0
    >>> csc.indices
    array([3, 1], dtype=int32)

I checked the scipy source code. The only way to guarantee it is `csc_matrix.tocsr()` and `csr_matrix.tocsc()`.
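Stripped of scipy, the invariant `SparseVector` needs is simply that index/value pairs arrive sorted by index; co-sorting them (sketch below, with a hypothetical helper name) is what the round trip through csr achieves.

```python
def sort_sparse_column(indices, values):
    """Co-sort parallel index/value arrays so indices are strictly increasing."""
    pairs = sorted(zip(indices, values))
    return [i for i, _ in pairs], [v for _, v in pairs]

# the unsorted layout produced by dok_matrix.tocsc() in the repro above:
# indices [3, 1] with data [2.0, 1.0]
idx, val = sort_sparse_column([3, 1], [2.0, 1.0])
```

After sorting, index 1 precedes index 3 and each value still travels with its index, so the `SparseVector` constructor's strictly-increasing check passes.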

## How was this patch tested?

Existing tests.

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: Liang-Chi Hsieh <viirya@gmail.com>

Closes #17532 from viirya/make-sure-sorted-indices.

(cherry picked from commit 12206058e8780e202c208b92774df3773eff36ae)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
18 months ago[SPARK-20223][SQL] Fix typo in tpcds q77.sql
wangzhenhua [Wed, 5 Apr 2017 17:21:43 +0000 (10:21 -0700)] 
[SPARK-20223][SQL] Fix typo in tpcds q77.sql

## What changes were proposed in this pull request?

Fix typo in tpcds q77.sql

## How was this patch tested?

N/A

Author: wangzhenhua <wangzhenhua@huawei.com>

Closes #17538 from wzhfy/typoQ77.

(cherry picked from commit a2d8d767d933321426a4eb9df1583e017722d7d6)
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
18 months ago[SPARK-20042][WEB UI] Fix log page buttons for reverse proxy mode
Oliver Köth [Wed, 5 Apr 2017 07:09:42 +0000 (08:09 +0100)] 
[SPARK-20042][WEB UI] Fix log page buttons for reverse proxy mode

With spark.ui.reverseProxy=true, full-path URLs like /log point to the
master web endpoint, which serves the worker UI as a reverse proxy.
To access a REST endpoint on the worker in reverse proxy mode, the
leading /proxy/"target"/ part of the base URI must be retained.

Added logic to log-view.js to handle this, similar to executorspage.js
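The prefix-preserving logic can be sketched as follows (Python for illustration; the actual change lives in JavaScript in `log-view.js`, and the function name here is hypothetical):

```python
import re

def proxy_base(path: str) -> str:
    # keep the leading /proxy/<target>/ segment when the UI is reverse-proxied,
    # so REST calls built from it still route through the master endpoint
    m = re.match(r"^(/proxy/[^/]+)/", path)
    return m.group(1) if m else ""

proxied = proxy_base("/proxy/worker-20170405/logPage/")   # "/proxy/worker-20170405"
direct = proxy_base("/logPage/")                          # "" (no proxy in play)
```

Prefixing REST URLs with this base makes the log-page buttons work both with and without the reverse proxy.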

Patch was tested manually

Author: Oliver Köth <okoeth@de.ibm.com>

Closes #17370 from okoethibm/master.

(cherry picked from commit 6f09dc70d9808cae004ceda9ad615aa9be50f43d)
Signed-off-by: Sean Owen <sowen@cloudera.com>
18 months ago[SPARK-20191][YARN] Crate wrapper for RackResolver so tests can override it.
Marcelo Vanzin [Tue, 4 Apr 2017 18:38:05 +0000 (11:38 -0700)] 
[SPARK-20191][YARN] Crate wrapper for RackResolver so tests can override it.

Current test code tries to override the RackResolver used by setting
configuration params, but because YARN libs statically initialize the
resolver the first time it's used, that means that those configs don't
really take effect during Spark tests.

This change adds a wrapper class that easily allows tests to override the
behavior of the resolver for the Spark code that uses it.
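The wrapper pattern can be sketched like this (hypothetical names, not the actual Spark class): production code depends only on an overridable instance, so tests can inject a double instead of fighting the statically initialized YARN resolver.

```python
class RackResolver:
    """Stand-in for the YARN-backed resolver (illustrative default behavior)."""
    def resolve(self, host: str) -> str:
        return "/default-rack"

class FixedRackResolver(RackResolver):
    """Test double returning a canned topology."""
    def __init__(self, mapping):
        self.mapping = mapping

    def resolve(self, host: str) -> str:
        return self.mapping.get(host, "/default-rack")

def rack_for(resolver: RackResolver, host: str) -> str:
    # Spark-side code calls through the wrapper, never the static YARN API
    return resolver.resolve(host)

test_rack = rack_for(FixedRackResolver({"host1": "/rack1"}), "host1")
prod_rack = rack_for(RackResolver(), "host1")
```

Because the resolver is an injected object rather than a statically initialized library, the test configuration actually takes effect.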

Author: Marcelo Vanzin <vanzin@cloudera.com>

Closes #17508 from vanzin/SPARK-20191.

(cherry picked from commit 0736980f395f114faccbd58e78280ca63ed289c7)
Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
18 months ago[SPARK-20190][APP-ID] applications//jobs' in rest api,status should be [running|s…
guoxiaolongzte [Tue, 4 Apr 2017 08:56:17 +0000 (09:56 +0100)] 
[SPARK-20190][APP-ID] applications//jobs' in rest api,status should be [running|s…

…ucceeded|failed|unknown]

## What changes were proposed in this pull request?

The `status` parameter of `/applications/[app-id]/jobs` in the REST API should be `[running|succeeded|failed|unknown]`.
The documentation currently lists `[complete|succeeded|failed]`,
but `/applications/[app-id]/jobs?status=complete` makes the server return `HTTP ERROR 404`.
This change adds `?status=running` and `?status=unknown`.
code:
```java
public enum JobExecutionStatus {
  RUNNING,
  SUCCEEDED,
  FAILED,
  UNKNOWN;
  // ...
}
```

## How was this patch tested?

 manual tests

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: guoxiaolongzte <guo.xiaolong1@zte.com.cn>

Closes #17507 from guoxiaolongzte/SPARK-20190.

(cherry picked from commit c95fbea68e9dfb2c96a1d13dde17d80a37066ae6)
Signed-off-by: Sean Owen <sowen@cloudera.com>
18 months ago[MINOR][DOCS] Replace non-breaking space to normal spaces that breaks rendering markdown
hyukjinkwon [Mon, 3 Apr 2017 09:09:11 +0000 (10:09 +0100)] 
[MINOR][DOCS] Replace non-breaking space to normal spaces that breaks rendering markdown

## What changes were proposed in this pull request?

It seems several non-breaking spaces were inserted into several `.md` files, and they break the rendering of those markdown files.

These are different. For example, this can be checked via `python` as below:

```python
>>> " "
'\xc2\xa0'
>>> " "
' '
```

_Note that it seems this PR description automatically replaces non-breaking spaces into normal spaces. Please open a `vi` and copy and paste it into `python` to verify this (do not copy the characters here)._

I checked the output below in Safari and Chrome on macOS, and Internet Explorer on Windows 10.

**Before**

![2017-04-03 12 37 17](https://cloud.githubusercontent.com/assets/6477701/24594655/50aaba02-186a-11e7-80bb-d34b17a3398a.png)
![2017-04-03 12 36 57](https://cloud.githubusercontent.com/assets/6477701/24594654/50a855e6-186a-11e7-94e2-661e56544b0f.png)

**After**

![2017-04-03 12 36 46](https://cloud.githubusercontent.com/assets/6477701/24594657/53c2545c-186a-11e7-9a73-00529afbfd75.png)
![2017-04-03 12 36 31](https://cloud.githubusercontent.com/assets/6477701/24594658/53c286c0-186a-11e7-99c9-e66b1f510fe7.png)

## How was this patch tested?

Manually checking.

These instances were found via

```
grep --include=*.scala --include=*.python --include=*.java --include=*.r --include=*.R --include=*.md --include=*.r -r -I " " .
```

in Mac OS.

It seems there are several instances more as below:

```
./docs/sql-programming-guide.md:        │   ├── ...
./docs/sql-programming-guide.md:        │   │
./docs/sql-programming-guide.md:        │   ├── country=US
./docs/sql-programming-guide.md:        │   │   └── data.parquet
./docs/sql-programming-guide.md:        │   ├── country=CN
./docs/sql-programming-guide.md:        │   │   └── data.parquet
./docs/sql-programming-guide.md:        │   └── ...
./docs/sql-programming-guide.md:            ├── ...
./docs/sql-programming-guide.md:            │
./docs/sql-programming-guide.md:            ├── country=US
./docs/sql-programming-guide.md:            │   └── data.parquet
./docs/sql-programming-guide.md:            ├── country=CN
./docs/sql-programming-guide.md:            │   └── data.parquet
./docs/sql-programming-guide.md:            └── ...
./sql/core/src/test/README.md:│   ├── *.avdl                  # Testing Avro IDL(s)
./sql/core/src/test/README.md:│   └── *.avpr                  # !! NO TOUCH !! Protocol files generated from Avro IDL(s)
./sql/core/src/test/README.md:│   ├── gen-avro.sh             # Script used to generate Java code for Avro
./sql/core/src/test/README.md:│   └── gen-thrift.sh           # Script used to generate Java code for Thrift
```

These seem to be generated via the `tree` command, which inserts non-breaking spaces. They do not cause any problem for rendering within code blocks, and I did not fix them, to reduce the overhead of manually replacing them whenever the output is regenerated via `tree` in the future.
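A one-liner (illustrative) both detects and normalizes the character; U+00A0 is the non-breaking space that `tree` and some editors insert:

```python
# a line of the form the PR description quotes, with U+00A0 after the period
text = "... on static data.\u00a0The Spark SQL engine will ..."

has_nbsp = "\u00a0" in text
cleaned = text.replace("\u00a0", " ")   # normalize to a plain ASCII space
```

The cleaned string is byte-for-byte what markdown renderers expect, whereas the original trips them up despite looking identical on screen.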

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #17517 from HyukjinKwon/non-breaking-space.

(cherry picked from commit 364b0db75308ddd346b4ab1e032680e8eb4c1753)
Signed-off-by: Sean Owen <sowen@cloudera.com>
18 months ago[SPARK-20197][SPARKR][BRANCH-2.1] CRAN check fail with package installation
Felix Cheung [Mon, 3 Apr 2017 02:44:14 +0000 (19:44 -0700)] 
[SPARK-20197][SPARKR][BRANCH-2.1] CRAN check fail with package installation

## What changes were proposed in this pull request?

Test failed because SPARK_HOME is not set before Spark is installed.
Also current directory is not == SPARK_HOME when tests are run with R CMD check, unlike in Jenkins, so disable that test for now. (that would also disable the test in Jenkins - so this change should not be ported to master as-is.)

## How was this patch tested?

Manual run R CMD check

Author: Felix Cheung <felixcheung_m@hotmail.com>

Closes #17515 from felixcheung/rcrancheck.

18 months ago[SPARK-19999][BACKPORT-2.1][CORE] Workaround JDK-8165231 to identify PPC64 architectu...
Kazuaki Ishizaki [Sun, 2 Apr 2017 14:33:48 +0000 (15:33 +0100)] 
[SPARK-19999][BACKPORT-2.1][CORE] Workaround JDK-8165231 to identify PPC64 architectures as supporting unaligned access

## What changes were proposed in this pull request?

This PR is backport of #17472 to Spark 2.1

java.nio.Bits.unaligned() does not return true for the ppc64le arch.
see [https://bugs.openjdk.java.net/browse/JDK-8165231](https://bugs.openjdk.java.net/browse/JDK-8165231)
Check architecture in Platform.java

## How was this patch tested?

unit test

Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>

Closes #17509 from kiszk/branch-2.1.

18 months ago[SPARK-20084][CORE] Remove internal.metrics.updatedBlockStatuses from history files.
Ryan Blue [Fri, 31 Mar 2017 16:42:49 +0000 (09:42 -0700)] 
[SPARK-20084][CORE] Remove internal.metrics.updatedBlockStatuses from history files.

## What changes were proposed in this pull request?

Remove accumulator updates for internal.metrics.updatedBlockStatuses from SparkListenerTaskEnd entries in the history file. These can cause history files to grow to hundreds of GB because the value of the accumulator contains all tracked blocks.
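The filtering idea can be sketched as below (the JSON field names are assumptions for illustration; the real change operates on Spark's event-log JSON for `SparkListenerTaskEnd`):

```python
BLOCK_STATUS_ACC = "internal.metrics.updatedBlockStatuses"

def strip_block_statuses(task_end_event: dict) -> dict:
    """Drop the oversized block-status accumulator from a task-end event."""
    accs = task_end_event.get("Accumulator Updates", [])
    task_end_event["Accumulator Updates"] = [
        a for a in accs if a.get("Name") != BLOCK_STATUS_ACC
    ]
    return task_end_event

event = {"Accumulator Updates": [
    {"Name": BLOCK_STATUS_ACC, "Value": ["...every tracked block..."]},
    {"Name": "internal.metrics.executorRunTime", "Value": 42},
]}
kept = [a["Name"] for a in strip_block_statuses(event)["Accumulator Updates"]]
```

Only the block-status entry, whose value grows with every tracked block, is dropped; all other metrics survive unchanged.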

## How was this patch tested?

Current History UI tests cover use of the history file.

Author: Ryan Blue <blue@apache.org>

Closes #17412 from rdblue/SPARK-20084-remove-block-accumulator-info.

(cherry picked from commit c4c03eed67c05a78dc8944f6119ea708d6b955be)
Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
18 months ago[SPARK-20164][SQL] AnalysisException not tolerant of null query plan.
Kunal Khamar [Fri, 31 Mar 2017 16:17:22 +0000 (09:17 -0700)] 
[SPARK-20164][SQL] AnalysisException not tolerant of null query plan.

The query plan in an `AnalysisException` may be `null` when an `AnalysisException` object is serialized and then deserialized, since `plan` is marked `transient`. Or when someone throws an `AnalysisException` with a null query plan (which should not happen).
`def getMessage` is not tolerant of this and throws a `NullPointerException`, leading to loss of information about the original exception.
The fix is to add a `null` check in `getMessage`.
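The guard reduces to this shape (a sketch with hypothetical names; the real method lives on `AnalysisException`):

```python
def get_message(message: str, plan) -> str:
    # tolerate a missing plan instead of raising on None, preserving the
    # original error message after (de)serialization drops the transient plan
    if plan is None:
        return message
    return f"{message};\n{plan}"

with_plan = get_message("cannot resolve 'x'", "Project [x]")
without_plan = get_message("cannot resolve 'x'", None)
```

Previously the `None` case threw, replacing the informative analysis error with a bare `NullPointerException`.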

- Unit test

Author: Kunal Khamar <kkhamar@outlook.com>

Closes #17486 from kunalkhamar/spark-20164.

(cherry picked from commit 254877c2f04414c70d92fa0a00c0ecee1d73aba7)
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
18 months ago[SPARK-20059][YARN] Use the correct classloader for HBaseCredentialProvider
jerryshao [Wed, 29 Mar 2017 17:09:58 +0000 (10:09 -0700)] 
[SPARK-20059][YARN] Use the correct classloader for HBaseCredentialProvider

## What changes were proposed in this pull request?

Currently we use the system classloader to find HBase jars; if they are specified via `--jars`, the lookup fails with a `ClassNotFoundException`. This change switches to the child classloader.

Also, this puts the added jars and the main jar onto the classpath of the submitted application in YARN cluster mode; otherwise HBase jars specified with `--jars` are never honored in cluster mode, and fetching tokens on the client side always fails.
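The classloader choice can be sketched like this (a hypothetical helper, not the actual Spark utility):

```scala
// Sketch: jars added via --jars live in a child classloader, typically
// installed as the thread context classloader, so consult it first and
// fall back to the system classloader only when it is absent.
def classForName(name: String): Class[_] = {
  val loader = Option(Thread.currentThread().getContextClassLoader)
    .getOrElse(ClassLoader.getSystemClassLoader)
  Class.forName(name, false, loader)
}

// Any class visible to the chosen loader resolves normally:
val cls = classForName("java.lang.String")
```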

## How was this patch tested?

Unit test and local verification.

Author: jerryshao <sshao@hortonworks.com>

Closes #17388 from jerryshao/SPARK-20059.

(cherry picked from commit c622a87c44e0621e1b3024fdca9b2aa3c508615b)
Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
18 months ago[SPARK-20134][SQL] SQLMetrics.postDriverMetricUpdates to simplify driver side metric...
Reynold Xin [Wed, 29 Mar 2017 07:02:15 +0000 (00:02 -0700)] 
[SPARK-20134][SQL] SQLMetrics.postDriverMetricUpdates to simplify driver side metric updates

## What changes were proposed in this pull request?
It is not super intuitive how to update SQLMetric on the driver side. This patch introduces a new SQLMetrics.postDriverMetricUpdates function to do that, and adds documentation to make it more obvious.

## How was this patch tested?
Updated a test case to use this method.

Author: Reynold Xin <rxin@databricks.com>

Closes #17464 from rxin/SPARK-20134.

(cherry picked from commit 9712bd3954c029de5c828f27b57d46e4a6325a38)
Signed-off-by: Reynold Xin <rxin@databricks.com>
18 months ago[SPARK-20043][ML] DecisionTreeModel: ImpurityCalculator builder fails for uppercase...
颜发才(Yan Facai) [Tue, 28 Mar 2017 23:14:01 +0000 (16:14 -0700)] 
[SPARK-20043][ML] DecisionTreeModel: ImpurityCalculator builder fails for uppercase impurity type Gini

Fix bug: DecisionTreeModel can't recognize the impurity "Gini" when loading

TODO:
+ [x] add unit test
+ [x] fix the bug

Author: 颜发才(Yan Facai) <facai.yan@gmail.com>

Closes #17407 from facaiy/BUG/decision_tree_loader_failer_with_Gini_impurity.

(cherry picked from commit 7d432af8f3c47973550ea253dae0c23cd2961bde)
Signed-off-by: Joseph K. Bradley <joseph@databricks.com>
18 months agoPreparing development version 2.1.2-SNAPSHOT
Patrick Wendell [Tue, 28 Mar 2017 21:29:08 +0000 (14:29 -0700)] 
Preparing development version 2.1.2-SNAPSHOT

18 months agoPreparing Spark release v2.1.1-rc2
Patrick Wendell [Tue, 28 Mar 2017 21:29:03 +0000 (14:29 -0700)] 
Preparing Spark release v2.1.1-rc2

18 months ago[SPARK-14536][SQL][BACKPORT-2.1] fix to handle null value in array type column for...
sureshthalamati [Tue, 28 Mar 2017 21:02:01 +0000 (14:02 -0700)] 
[SPARK-14536][SQL][BACKPORT-2.1] fix to handle null value in array type column for postgres.

## What changes were proposed in this pull request?
JDBC reads fail with an NPE due to a missing null-value check for the array data type when the source table has null values in an array-typed column: for null values, `ResultSet.getArray()` returns null.
This PR adds a null-safe check on the `ResultSet.getArray()` value before invoking methods on the `Array` object.
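The shape of that null-safe check, sketched with a hypothetical helper name:

```scala
// Sketch: guard the possibly-null java.sql.Array before calling methods on
// it, so a SQL NULL array column round-trips as null instead of throwing.
def arrayToSeq(sqlArray: java.sql.Array): Seq[AnyRef] =
  if (sqlArray == null) null  // preserve SQL NULL rather than NPE
  else sqlArray.getArray.asInstanceOf[Array[AnyRef]].toSeq
```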

## How was this patch tested?
Updated the PostgresIntegration test suite to test null values. Ran docker integration tests on my laptop.

Author: sureshthalamati <suresh.thalamati@gmail.com>

Closes #17460 from sureshthalamati/jdbc_array_null_fix_spark_2.1-SPARK-14536.

18 months ago[SPARK-20125][SQL] Dataset of type option of map does not work
Wenchen Fan [Tue, 28 Mar 2017 18:47:43 +0000 (11:47 -0700)] 
[SPARK-20125][SQL] Dataset of type option of map does not work

When we build the deserializer expression for map type, we use `StaticInvoke` to call `ArrayBasedMapData.toScalaMap`, and declare the return type as `scala.collection.immutable.Map`. If the map is inside an Option, we wrap this `StaticInvoke` with `WrapOption`, which requires the input to be `scala.collection.Map`. Ideally this should be fine, as `scala.collection.immutable.Map` extends `scala.collection.Map`, but our `ObjectType` is too strict about this; this PR fixes it.
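The relaxed type check can be illustrated outside Catalyst (the helper name is illustrative):

```scala
// Sketch of the relaxed acceptance check: compare classes with
// isAssignableFrom so that a subtype (immutable.Map) is accepted where the
// supertype (scala.collection.Map) is expected, instead of requiring an
// exact class match.
def acceptsType(expected: Class[_], actual: Class[_]): Boolean =
  expected.isAssignableFrom(actual)

val ok = acceptsType(
  classOf[scala.collection.Map[_, _]],
  classOf[scala.collection.immutable.Map[_, _]])
```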

new regression test

Author: Wenchen Fan <wenchen@databricks.com>

Closes #17454 from cloud-fan/map.

(cherry picked from commit d4fac410e0554b7ccd44be44b7ce2fe07ed7f206)
Signed-off-by: Cheng Lian <lian@databricks.com>
18 months ago[SPARK-19995][YARN] Register tokens to current UGI to avoid re-issuing of tokens...
jerryshao [Tue, 28 Mar 2017 17:41:11 +0000 (10:41 -0700)] 
[SPARK-19995][YARN] Register tokens to current UGI to avoid re-issuing of tokens in yarn client mode

## What changes were proposed in this pull request?

In the current Spark on YARN code, we obtain tokens from the provided services, but we do not add these tokens to the current user's credentials. This makes all subsequent operations against these services still require a TGT rather than delegation tokens, which is unnecessary since we already have the tokens. It also leads to failure in user-impersonation scenarios, because the TGT is granted to the real user, not the proxy user.

This change puts all the tokens into the current UGI, so that subsequent operations against these services honor the tokens rather than the TGT; this also addresses the proxy-user issue mentioned above.

## How was this patch tested?

Local verified in secure cluster.

vanzin tgravescs mridulm  dongjoon-hyun please help to review, thanks a lot.

Author: jerryshao <sshao@hortonworks.com>

Closes #17335 from jerryshao/SPARK-19995.

(cherry picked from commit 17eddb35a280e77da7520343e0bf2a86b329ed62)
Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
18 months ago[SPARK-20102] Fix nightly packaging and RC packaging scripts w/ two minor build fixes
Josh Rosen [Mon, 27 Mar 2017 17:23:28 +0000 (10:23 -0700)] 
[SPARK-20102] Fix nightly packaging and RC packaging scripts w/ two minor build fixes

## What changes were proposed in this pull request?

The master snapshot publisher builds are currently broken due to two minor build issues:

1. For unknown reasons, the LFTP `mkdir -p` command began throwing errors when the remote directory already exists. This change of behavior might have been caused by configuration changes in the ASF's SFTP server, but I'm not entirely sure of that. To work around this problem, this patch updates the script to ignore errors from the `lftp mkdir -p` commands.
2. The PySpark `setup.py` file references a non-existent `pyspark.ml.stat` module, causing Python packaging to fail by complaining about a missing directory. The fix is to simply drop that line from the setup script.

## How was this patch tested?

The LFTP fix was tested by manually running the failing commands on AMPLab Jenkins against the ASF SFTP server. The PySpark fix was tested locally.

Author: Josh Rosen <joshrosen@databricks.com>

Closes #17437 from JoshRosen/spark-20102.

(cherry picked from commit 314cf51ded52834cfbaacf58d3d05a220965ca2a)
Signed-off-by: Josh Rosen <joshrosen@databricks.com>
18 months ago[SPARK-20086][SQL] CollapseWindow should not collapse dependent adjacent windows
Herman van Hovell [Sun, 26 Mar 2017 20:47:31 +0000 (22:47 +0200)] 
[SPARK-20086][SQL] CollapseWindow should not collapse dependent adjacent windows

## What changes were proposed in this pull request?
The `CollapseWindow` rule is currently too aggressive when collapsing adjacent windows. It also collapses windows in which the parent produces a column that is consumed by the child; this creates an invalid window which will fail at runtime.

This PR fixes this by adding a check for dependent adjacent windows to the `CollapseWindow` rule.
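The dependency check can be sketched with attribute handling reduced to plain name sets (a simplification of the rule's actual expression traversal):

```scala
// Minimal sketch: collapse two adjacent windows only when the parent's
// window expressions reference nothing the child's window expressions
// produce. Attribute names stand in for Catalyst attribute references.
def canCollapse(parentReferences: Set[String], childWindowOutputs: Set[String]): Boolean =
  parentReferences.intersect(childWindowOutputs).isEmpty

// The parent consumes "rnk", which the child window produced: keep them separate.
val dependent = !canCollapse(Set("rnk", "a"), Set("rnk"))
```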

## How was this patch tested?
Added a new test case to `CollapseWindowSuite`

Author: Herman van Hovell <hvanhovell@databricks.com>

Closes #17432 from hvanhovell/SPARK-20086.

(cherry picked from commit 617ab6445ea33d8297f0691723fd19bae19228dc)
Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
18 months ago[SPARK-19674][SQL] Ignore driver accumulator updates don't belong to …
Carson Wang [Sat, 25 Mar 2017 12:36:15 +0000 (20:36 +0800)] 
[SPARK-19674][SQL] Ignore driver accumulator updates don't belong to …

[SPARK-19674][SQL] Ignore driver accumulator updates that don't belong to the execution when merging all accumulator updates

N.B. This is a backport to branch-2.1 of #17009.

## What changes were proposed in this pull request?
In `SQLListener.getExecutionMetrics`, driver accumulator updates that don't belong to the execution should be ignored when merging all accumulator updates, to prevent a `NoSuchElementException`.
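The defensive merge can be sketched as follows (metric types simplified to `Long`; names are illustrative):

```scala
// Sketch: skip accumulator updates whose id is not registered for this
// execution instead of indexing the map directly with map(id), which
// throws NoSuchElementException for an unknown id.
def mergeMetrics(registered: Map[Long, Long], updates: Seq[(Long, Long)]): Map[Long, Long] =
  updates.foldLeft(registered) {
    case (acc, (id, value)) if acc.contains(id) => acc.updated(id, acc(id) + value)
    case (acc, _) => acc  // driver update for another execution: ignore it
  }

// id 99 belongs to another execution and is silently dropped:
val merged = mergeMetrics(Map(1L -> 10L), Seq((1L, 5L), (99L, 7L)))
```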

## How was this patch tested?
Updated unit test.

Author: Carson Wang <carson.wang@intel.com>

Author: Carson Wang <carson.wang@intel.com>

Closes #17418 from mallman/spark-19674-backport_2.1.

18 months ago[SPARK-19959][SQL] Fix to throw NullPointerException in df[java.lang.Long].collect
Kazuaki Ishizaki [Fri, 24 Mar 2017 04:57:56 +0000 (12:57 +0800)] 
[SPARK-19959][SQL] Fix to throw NullPointerException in df[java.lang.Long].collect

## What changes were proposed in this pull request?

This PR fixes a `NullPointerException` in code generated by Catalyst. When we run the following code, we get the `NullPointerException` below. This is because there are no null checks for `inputadapter_value`, while `java.lang.Long inputadapter_value` at line 30 may be `null`.

This happens when the type of a DataFrame column is a nullable boxed type such as `java.lang.Long` and whole-stage codegen is used. While the physical plan keeps `nullable=true` in `input[0, java.lang.Long, true].longValue`, `BoundReference.doGenCode` ignores `nullable=true`. Thus, no null-check code is generated and a `NullPointerException` occurs.

This PR checks the nullability and correctly generates nullcheck if needed.
```java
sparkContext.parallelize(Seq[java.lang.Long](0L, null, 2L), 1).toDF.collect
```

```java
Caused by: java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:37)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:393)
...
```

Generated code without this PR
```java
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */   private Object[] references;
/* 007 */   private scala.collection.Iterator[] inputs;
/* 008 */   private scala.collection.Iterator inputadapter_input;
/* 009 */   private UnsafeRow serializefromobject_result;
/* 010 */   private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder serializefromobject_holder;
/* 011 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter serializefromobject_rowWriter;
/* 012 */
/* 013 */   public GeneratedIterator(Object[] references) {
/* 014 */     this.references = references;
/* 015 */   }
/* 016 */
/* 017 */   public void init(int index, scala.collection.Iterator[] inputs) {
/* 018 */     partitionIndex = index;
/* 019 */     this.inputs = inputs;
/* 020 */     inputadapter_input = inputs[0];
/* 021 */     serializefromobject_result = new UnsafeRow(1);
/* 022 */     this.serializefromobject_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(serializefromobject_result, 0);
/* 023 */     this.serializefromobject_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(serializefromobject_holder, 1);
/* 024 */
/* 025 */   }
/* 026 */
/* 027 */   protected void processNext() throws java.io.IOException {
/* 028 */     while (inputadapter_input.hasNext() && !stopEarly()) {
/* 029 */       InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
/* 030 */       java.lang.Long inputadapter_value = (java.lang.Long)inputadapter_row.get(0, null);
/* 031 */
/* 032 */       boolean serializefromobject_isNull = true;
/* 033 */       long serializefromobject_value = -1L;
/* 034 */       if (!false) {
/* 035 */         serializefromobject_isNull = false;
/* 036 */         if (!serializefromobject_isNull) {
/* 037 */           serializefromobject_value = inputadapter_value.longValue();
/* 038 */         }
/* 039 */
/* 040 */       }
/* 041 */       serializefromobject_rowWriter.zeroOutNullBytes();
/* 042 */
/* 043 */       if (serializefromobject_isNull) {
/* 044 */         serializefromobject_rowWriter.setNullAt(0);
/* 045 */       } else {
/* 046 */         serializefromobject_rowWriter.write(0, serializefromobject_value);
/* 047 */       }
/* 048 */       append(serializefromobject_result);
/* 049 */       if (shouldStop()) return;
/* 050 */     }
/* 051 */   }
/* 052 */ }
```

Generated code with this PR

```java
/* 005 */ final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 006 */   private Object[] references;
/* 007 */   private scala.collection.Iterator[] inputs;
/* 008 */   private scala.collection.Iterator inputadapter_input;
/* 009 */   private UnsafeRow serializefromobject_result;
/* 010 */   private org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder serializefromobject_holder;
/* 011 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter serializefromobject_rowWriter;
/* 012 */
/* 013 */   public GeneratedIterator(Object[] references) {
/* 014 */     this.references = references;
/* 015 */   }
/* 016 */
/* 017 */   public void init(int index, scala.collection.Iterator[] inputs) {
/* 018 */     partitionIndex = index;
/* 019 */     this.inputs = inputs;
/* 020 */     inputadapter_input = inputs[0];
/* 021 */     serializefromobject_result = new UnsafeRow(1);
/* 022 */     this.serializefromobject_holder = new org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder(serializefromobject_result, 0);
/* 023 */     this.serializefromobject_rowWriter = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(serializefromobject_holder, 1);
/* 024 */
/* 025 */   }
/* 026 */
/* 027 */   protected void processNext() throws java.io.IOException {
/* 028 */     while (inputadapter_input.hasNext() && !stopEarly()) {
/* 029 */       InternalRow inputadapter_row = (InternalRow) inputadapter_input.next();
/* 030 */       boolean inputadapter_isNull = inputadapter_row.isNullAt(0);
/* 031 */       java.lang.Long inputadapter_value = inputadapter_isNull ? null : ((java.lang.Long)inputadapter_row.get(0, null));
/* 032 */
/* 033 */       boolean serializefromobject_isNull = true;
/* 034 */       long serializefromobject_value = -1L;
/* 035 */       if (!inputadapter_isNull) {
/* 036 */         serializefromobject_isNull = false;
/* 037 */         if (!serializefromobject_isNull) {
/* 038 */           serializefromobject_value = inputadapter_value.longValue();
/* 039 */         }
/* 040 */
/* 041 */       }
/* 042 */       serializefromobject_rowWriter.zeroOutNullBytes();
/* 043 */
/* 044 */       if (serializefromobject_isNull) {
/* 045 */         serializefromobject_rowWriter.setNullAt(0);
/* 046 */       } else {
/* 047 */         serializefromobject_rowWriter.write(0, serializefromobject_value);
/* 048 */       }
/* 049 */       append(serializefromobject_result);
/* 050 */       if (shouldStop()) return;
/* 051 */     }
/* 052 */   }
/* 053 */ }
```

## How was this patch tested?

Added new test suites in `DataFrameSuites`

Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>

Closes #17302 from kiszk/SPARK-19959.

(cherry picked from commit bb823ca4b479a00030c4919c2d857d254b2a44d8)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
19 months ago[SPARK-19970][SQL][BRANCH-2.1] Table owner should be USER instead of PRINCIPAL in...
Dongjoon Hyun [Thu, 23 Mar 2017 21:55:31 +0000 (14:55 -0700)] 
[SPARK-19970][SQL][BRANCH-2.1] Table owner should be USER instead of PRINCIPAL in kerberized clusters

## What changes were proposed in this pull request?

In a kerberized Hadoop cluster, when Spark creates tables, the owners of the tables are filled with PRINCIPAL strings instead of USER names. This is inconsistent with Hive and causes problems when using [ROLE](https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization) in Hive. We had better fix this.

**BEFORE**
```scala
scala> sql("create table t(a int)").show
scala> sql("desc formatted t").show(false)
...
|Owner:                      |spark@EXAMPLE.COM                                         |       |
```

**AFTER**
```scala
scala> sql("create table t(a int)").show
scala> sql("desc formatted t").show(false)
...
|Owner:                      |spark                                         |       |
```
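The owner transformation can be sketched as below; note that Spark's actual fix relies on Hadoop's `UserGroupInformation.getShortUserName` rather than this hand-rolled split:

```scala
// Illustrative sketch only: derive the short user name from a Kerberos
// principal of the form user[/host]@REALM by keeping everything before the
// first '/' or '@'.
def shortUserName(principal: String): String =
  principal.split("[/@]")(0)

val owner = shortUserName("spark@EXAMPLE.COM")  // the USER, not the PRINCIPAL
```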

## How was this patch tested?

Manually do `create table` and `desc formatted` because this happens in Kerberized clusters.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #17363 from dongjoon-hyun/SPARK-19970-2.

19 months ago[SPARK-20021][PYSPARK] Miss backslash in python code
uncleGen [Wed, 22 Mar 2017 11:10:08 +0000 (11:10 +0000)] 
[SPARK-20021][PYSPARK] Miss backslash in python code

## What changes were proposed in this pull request?

Add backslash for line continuation in python code.

## How was this patch tested?

Jenkins.

Author: uncleGen <hustyugm@gmail.com>
Author: dylon <hustyugm@gmail.com>

Closes #17352 from uncleGen/python-example-doc.

(cherry picked from commit facfd608865c385c0dabfe09cffe5874532a9cdf)
Signed-off-by: Sean Owen <sowen@cloudera.com>
19 months ago[SPARK-19925][SPARKR] Fix SparkR spark.getSparkFiles fails when it was called on...
Yanbo Liang [Wed, 22 Mar 2017 04:50:54 +0000 (21:50 -0700)] 
[SPARK-19925][SPARKR] Fix SparkR spark.getSparkFiles fails when it was called on executors.

## What changes were proposed in this pull request?
SparkR ```spark.getSparkFiles``` fails when it was called on executors, see details at [SPARK-19925](https://issues.apache.org/jira/browse/SPARK-19925).

## How was this patch tested?
Add unit tests, and verify this fix at standalone and yarn cluster.

Author: Yanbo Liang <ybliang8@gmail.com>

Closes #17274 from yanboliang/spark-19925.

(cherry picked from commit 478fbc866fbfdb4439788583281863ecea14e8af)
Signed-off-by: Yanbo Liang <ybliang8@gmail.com>
19 months agoPreparing development version 2.1.2-SNAPSHOT
Patrick Wendell [Wed, 22 Mar 2017 01:30:07 +0000 (18:30 -0700)] 
Preparing development version 2.1.2-SNAPSHOT

19 months agoPreparing Spark release v2.1.1-rc1
Patrick Wendell [Wed, 22 Mar 2017 01:30:02 +0000 (18:30 -0700)] 
Preparing Spark release v2.1.1-rc1

19 months ago[SPARK-19980][SQL][BACKPORT-2.1] Add NULL checks in Bean serializer
Takeshi Yamamuro [Wed, 22 Mar 2017 00:37:54 +0000 (08:37 +0800)] 
[SPARK-19980][SQL][BACKPORT-2.1] Add NULL checks in Bean serializer

## What changes were proposed in this pull request?
A Bean serializer in `ExpressionEncoder` could change values when Beans have NULL fields. A concrete example is as follows:
```
scala> :paste
class Outer extends Serializable {
  private var cls: Inner = _
  def setCls(c: Inner): Unit = cls = c
  def getCls(): Inner = cls
}

class Inner extends Serializable {
  private var str: String = _
  def setStr(s: String): Unit = str = s
  def getStr(): String = str
}

scala> Seq("""{"cls":null}""", """{"cls": {"str":null}}""").toDF().write.text("data")
scala> val encoder = Encoders.bean(classOf[Outer])
scala> val schema = encoder.schema
scala> val df = spark.read.schema(schema).json("data").as[Outer](encoder)
scala> df.show
+------+
|   cls|
+------+
|[null]|
|  null|
+------+

scala> df.map(x => x)(encoder).show()
+------+
|   cls|
+------+
|[null]|
|[null]|     // <-- Value changed
+------+
```

This is because the Bean serializer does not have the NULL-check expressions that the serializer of Scala's product types has. Actually, this value change does not happen in Scala's product types;

```
scala> :paste
case class Outer(cls: Inner)
case class Inner(str: String)

scala> val encoder = Encoders.product[Outer]
scala> val schema = encoder.schema
scala> val df = spark.read.schema(schema).json("data").as[Outer](encoder)
scala> df.show
+------+
|   cls|
+------+
|[null]|
|  null|
+------+

scala> df.map(x => x)(encoder).show()
+------+
|   cls|
+------+
|[null]|
|  null|
+------+
```

This PR adds the NULL-check expressions to the Bean serializer, matching the serializer for Scala's product types.
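The added guard can be reduced to the following sketch (the real change builds Catalyst serializer expressions; `Struct` and `serializeBean` are illustrative stand-ins):

```scala
// Sketch of the missing NULL check: a null bean must serialize to a
// top-level null, not to a struct of its (null) fields — otherwise
// `null` silently becomes `[null]`, as in the example above.
final case class Struct(fields: Seq[Any])

def serializeBean(bean: AnyRef, extractFields: AnyRef => Seq[Any]): Any =
  if (bean == null) null            // the guard this PR adds
  else Struct(extractFields(bean))

val nullCase = serializeBean(null, (_: AnyRef) => Seq.empty)
```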

## How was this patch tested?
Added tests in `JavaDatasetSuite`.

Author: Takeshi Yamamuro <yamamuro@apache.org>

Closes #17372 from maropu/SPARK-19980-BACKPORT2.1.

19 months agoclarify array_contains function description
Will Manning [Tue, 21 Mar 2017 23:40:48 +0000 (00:40 +0100)] 
clarify array_contains function description

## What changes were proposed in this pull request?

The description in the comment for array_contains is vague/incomplete (i.e., doesn't mention that it returns `null` if the array is `null`); this PR fixes that.

## How was this patch tested?

No testing, since it merely changes a comment.


Author: Will Manning <lwwmanning@gmail.com>

Closes #17380 from lwwmanning/patch-1.

(cherry picked from commit a04dcde8cb191e591a5f5d7a67a5371e31e7343c)
Signed-off-by: Reynold Xin <rxin@databricks.com>
19 months ago[SPARK-19237][SPARKR][CORE] On Windows spark-submit should handle when java is not...
Felix Cheung [Tue, 21 Mar 2017 21:24:41 +0000 (14:24 -0700)] 
[SPARK-19237][SPARKR][CORE] On Windows spark-submit should handle when java is not installed

## What changes were proposed in this pull request?

When SparkR is installed as an R package, there might not be any Java runtime.
If it is not there, SparkR's `sparkR.session()` will block waiting for the connection timeout, hanging the R IDE/shell without any notification or message.

## How was this patch tested?

manually

- [x] need to test on Windows

Author: Felix Cheung <felixcheung_m@hotmail.com>

Closes #16596 from felixcheung/rcheckjava.

(cherry picked from commit a8877bdbba6df105740f909bc87a13cdd4440757)
Signed-off-by: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
19 months ago[SPARK-20017][SQL] change the nullability of function 'StringToMap' from 'false'...
zhaorongsheng [Tue, 21 Mar 2017 18:30:55 +0000 (11:30 -0700)] 
[SPARK-20017][SQL] change the nullability of function 'StringToMap' from 'false' to 'true'

## What changes were proposed in this pull request?

Change the nullability of function `StringToMap` from `false` to `true`.

Author: zhaorongsheng <334362872@qq.com>

Closes #17350 from zhaorongsheng/bug-fix_strToMap_NPE.

(cherry picked from commit 7dbc162f12cc1a447c85a1a2c20d32ebb5cbeacf)
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
19 months ago[SPARK-19912][SQL] String literals should be escaped for Hive metastore partition...
Dongjoon Hyun [Tue, 21 Mar 2017 04:17:26 +0000 (12:17 +0800)] 
[SPARK-19912][SQL] String literals should be escaped for Hive metastore partition pruning

## What changes were proposed in this pull request?

Since the current `HiveShim`'s `convertFilters` does not escape string literals, the following correctness issues exist. This PR aims to return the correct result and also show a clearer exception message.

**BEFORE**

```scala
scala> Seq((1, "p1", "q1"), (2, "p1\" and q=\"q1", "q2")).toDF("a", "p", "q").write.partitionBy("p", "q").saveAsTable("t1")

scala> spark.table("t1").filter($"p" === "p1\" and q=\"q1").select($"a").show
+---+
|  a|
+---+
+---+

scala> spark.table("t1").filter($"p" === "'\"").select($"a").show
java.lang.RuntimeException: Caught Hive MetaException attempting to get partition metadata by filter from ...
```

**AFTER**

```scala
scala> spark.table("t1").filter($"p" === "p1\" and q=\"q1").select($"a").show
+---+
|  a|
+---+
|  2|
+---+

scala> spark.table("t1").filter($"p" === "'\"").select($"a").show
java.lang.UnsupportedOperationException: Partition filter cannot have both `"` and `'` characters
```
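The escaping rule can be sketched as follows (`quoteStringLiteral` mirrors the behavior described above but is a simplified stand-in, not the actual `HiveShim` code):

```scala
// Sketch: quote the literal with whichever delimiter it does not contain,
// and reject literals containing both, matching the AFTER behavior above.
def quoteStringLiteral(str: String): String =
  if (!str.contains("\"")) "\"" + str + "\""
  else if (!str.contains("'")) "'" + str + "'"
  else throw new UnsupportedOperationException(
    "Partition filter cannot have both `\"` and `'` characters")

val quoted = quoteStringLiteral("p1\" and q=\"q1")  // falls back to single quotes
```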

## How was this patch tested?

Pass the Jenkins test with new test cases.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes #17266 from dongjoon-hyun/SPARK-19912.

(cherry picked from commit 21e366aea5a7f49e42e78dce06ff6b3ee1e36f06)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
19 months ago[SPARK-17204][CORE] Fix replicated off heap storage
Michael Allman [Tue, 21 Mar 2017 03:51:22 +0000 (11:51 +0800)] 
[SPARK-17204][CORE] Fix replicated off heap storage

(Jira: https://issues.apache.org/jira/browse/SPARK-17204)

## What changes were proposed in this pull request?

There are a couple of bugs in the `BlockManager` with respect to support for replicated off-heap storage. First, the locally-stored off-heap byte buffer is disposed of when it is replicated. It should not be. Second, the replica byte buffers are stored as heap byte buffers instead of direct byte buffers even when the storage level memory mode is off-heap. This PR addresses both of these problems.

## How was this patch tested?

`BlockManagerReplicationSuite` was enhanced to fill in the coverage gaps. It now fails if either of the bugs in this PR exist.

Author: Michael Allman <michael@videoamp.com>

Closes #16499 from mallman/spark-17204-replicated_off_heap_storage.

(cherry picked from commit 7fa116f8fc77906202217c0cd2f9718a4e62632b)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
19 months ago[SPARK-19994][SQL] Wrong outputOrdering for right/full outer smj
wangzhenhua [Mon, 20 Mar 2017 06:37:23 +0000 (14:37 +0800)] 
[SPARK-19994][SQL] Wrong outputOrdering for right/full outer smj

## What changes were proposed in this pull request?

For a right outer join, values of the left key will be filled with nulls if they can't match a value of the right key, so `nullOrdering` of the left key can't be guaranteed. We should output the right key's order instead of the left key's.

For a full outer join, neither the left key nor the right key guarantees `nullOrdering`. We should not output any ordering.

In tests, besides adding three test cases for left/right/full outer sort merge join, this patch also reorganizes code in `PlannerSuite` by putting together tests for `Sort`, and also extracts common logic in Sort tests into a method.

## How was this patch tested?

Corresponding test cases are added.

Author: wangzhenhua <wangzhenhua@huawei.com>
Author: Zhenhua Wang <wzh_zju@163.com>

Closes #17331 from wzhfy/wrongOrdering.

(cherry picked from commit 965a5abcff3adccc10a53b0d97d06c43934df1a2)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
19 months ago[SPARK-18817][SPARKR][SQL] change derby log output to temp dir
Felix Cheung [Sun, 19 Mar 2017 17:37:15 +0000 (10:37 -0700)] 
[SPARK-18817][SPARKR][SQL] change derby log output to temp dir

## What changes were proposed in this pull request?

Passes R's `tempdir()` (the R session temp dir, shared with other temp files/dirs) to the JVM and sets the system property for the Derby home dir, relocating `derby.log`.

## How was this patch tested?

Manually, unit tests

With this, these are relocated to under /tmp
```
# ls /tmp/RtmpG2M0cB/
derby.log
```
And they are removed automatically when the R session is ended.

Author: Felix Cheung <felixcheung_m@hotmail.com>

Closes #16330 from felixcheung/rderby.

(cherry picked from commit 422aa67d1bb84f913b06e6d94615adb6557e2870)
Signed-off-by: Felix Cheung <felixcheung@apache.org>
19 months ago[SQL][MINOR] Fix scaladoc for UDFRegistration
Jacek Laskowski [Sat, 18 Mar 2017 04:55:10 +0000 (21:55 -0700)] 
[SQL][MINOR] Fix scaladoc for UDFRegistration

## What changes were proposed in this pull request?

Fix scaladoc for UDFRegistration

## How was this patch tested?

local build

Author: Jacek Laskowski <jacek@japila.pl>

Closes #17337 from jaceklaskowski/udfregistration-scaladoc.

(cherry picked from commit 6326d406b98a34e9cc8afa6743b23ee1cced8611)
Signed-off-by: Reynold Xin <rxin@databricks.com>
19 months ago[SPARK-19986][TESTS] Make pyspark.streaming.tests.CheckpointTests more stable
Shixiong Zhu [Fri, 17 Mar 2017 18:12:23 +0000 (11:12 -0700)] 
[SPARK-19986][TESTS] Make pyspark.streaming.tests.CheckpointTests more stable

## What changes were proposed in this pull request?

Sometimes, CheckpointTests will hang on a busy machine because the streaming jobs are too slow and cannot catch up. I observed the scheduling delay kept increasing for dozens of seconds locally.

This PR increases the batch interval from 0.5 seconds to 2 seconds to generate fewer Spark jobs. It should make `pyspark.streaming.tests.CheckpointTests` more stable. I also replaced `sleep` with `awaitTerminationOrTimeout` so that if the streaming job fails, it will also fail the test.

## How was this patch tested?

Jenkins

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #17323 from zsxwing/SPARK-19986.

(cherry picked from commit 376d782164437573880f0ad58cecae1cb5f212f2)
Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
19 months ago[SPARK-19721][SS][BRANCH-2.1] Good error message for version mismatch in log files
Liwei Lin [Fri, 17 Mar 2017 17:41:17 +0000 (10:41 -0700)] 
[SPARK-19721][SS][BRANCH-2.1] Good error message for version mismatch in log files

## Problem

There are several places where we write out version identifiers in various logs for structured streaming (usually `v1`). However, in the places where we check for this, we throw a confusing error message.

## What changes were proposed in this pull request?

This patch made two major changes:
1. added a `parseVersion(...)` method, and based on this method, fixed the following places the way they did version checking (no other place needed to do this checking):
```
HDFSMetadataLog
  - CompactibleFileStreamLog  ------------> fixed with this patch
    - FileStreamSourceLog  ---------------> inherited the fix of `CompactibleFileStreamLog`
    - FileStreamSinkLog  -----------------> inherited the fix of `CompactibleFileStreamLog`
  - OffsetSeqLog  ------------------------> fixed with this patch
  - anonymous subclass in KafkaSource  ---> fixed with this patch
```

2. changed the type of `FileStreamSinkLog.VERSION`, `FileStreamSourceLog.VERSION` etc. from `String` to `Int`, so that we can identify newer versions via `version > 1` instead of `version != "v1"`
    - note this didn't break any backwards compatibility -- we are still writing out `"v1"` and reading back `"v1"`
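The version check can be sketched as follows (a simplified stand-in for the `parseVersion(...)` method; error handling for malformed digits is elided):

```scala
// Sketch: accept a "v<n>" identifier up to the maximum supported version,
// otherwise fail with an explicit, versioned message instead of a
// confusing one.
def parseVersion(text: String, maxSupportedVersion: Int): Int = {
  if (text.startsWith("v")) {
    val version = text.drop(1).toInt  // assumes well-formed digits for brevity
    if (version <= maxSupportedVersion) return version
  }
  throw new IllegalStateException(
    s"maximum supported log version is v$maxSupportedVersion, but encountered $text")
}

val v = parseVersion("v1", 1)
```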

## Exception message with this patch
```
java.lang.IllegalStateException: Failed to read log file /private/var/folders/nn/82rmvkk568sd8p3p8tb33trw0000gn/T/spark-86867b65-0069-4ef1-b0eb-d8bd258ff5b8/0. UnsupportedLogVersion: maximum supported log version is v1, but encountered v99. The log file was produced by a newer version of Spark and cannot be read by this version. Please upgrade.
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.get(HDFSMetadataLog.scala:202)
at org.apache.spark.sql.execution.streaming.OffsetSeqLogSuite$$anonfun$3$$anonfun$apply$mcV$sp$2.apply(OffsetSeqLogSuite.scala:78)
at org.apache.spark.sql.execution.streaming.OffsetSeqLogSuite$$anonfun$3$$anonfun$apply$mcV$sp$2.apply(OffsetSeqLogSuite.scala:75)
at org.apache.spark.sql.test.SQLTestUtils$class.withTempDir(SQLTestUtils.scala:133)
at org.apache.spark.sql.execution.streaming.OffsetSeqLogSuite.withTempDir(OffsetSeqLogSuite.scala:26)
at org.apache.spark.sql.execution.streaming.OffsetSeqLogSuite$$anonfun$3.apply$mcV$sp(OffsetSeqLogSuite.scala:75)
at org.apache.spark.sql.execution.streaming.OffsetSeqLogSuite$$anonfun$3.apply(OffsetSeqLogSuite.scala:75)
at org.apache.spark.sql.execution.streaming.OffsetSeqLogSuite$$anonfun$3.apply(OffsetSeqLogSuite.scala:75)
at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
```

## How was this patch tested?

unit tests

Author: Liwei Lin <lwlin7@gmail.com>

Closes #17327 from lw-lin/good-msg-2.1.

19 months ago[SPARK-19765][SPARK-18549][SPARK-19093][SPARK-19736][BACKPORT-2.1][SQL] Backport...
Xiao Li [Fri, 17 Mar 2017 02:57:53 +0000 (10:57 +0800)] 
[SPARK-19765][SPARK-18549][SPARK-19093][SPARK-19736][BACKPORT-2.1][SQL] Backport Three Cache-related PRs to Spark 2.1

### What changes were proposed in this pull request?

Backport a few cache related PRs:

---
[[SPARK-19093][SQL] Cached tables are not used in SubqueryExpression](https://github.com/apache/spark/pull/16493)

Consider the plans inside subquery expressions when looking up the cache manager, so that
cached data is reused. Currently `CacheManager.useCachedData` does not consider the
subquery expressions in the plan.
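A sketch of the idea (hypothetical plan representation, not Catalyst's API): the cache lookup should enumerate plans embedded in subquery expressions as well as the main plan tree:

```python
def plans_to_check(plan):
    """Hypothetical sketch: enumerate every plan that should be matched
    against cached plans, including plans embedded in subquery
    expressions (the part useCachedData previously missed)."""
    yield plan
    for child in plan.get("children", []):
        yield from plans_to_check(child)
    # The fix: also descend into subquery expressions.
    for sub in plan.get("subqueries", []):
        yield from plans_to_check(sub)
```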

---
[[SPARK-19736][SQL] refreshByPath should clear all cached plans with the specified path](https://github.com/apache/spark/pull/17064)

`Catalog.refreshByPath` can refresh the cache entry and the associated metadata for all DataFrames (if any) that contain the given data source path.

However, `CacheManager.invalidateCachedPath` doesn't clear all cached plans with the specified path, which causes the strange behaviors reported in SPARK-15678.

---
[[SPARK-19765][SPARK-18549][SQL] UNCACHE TABLE should un-cache all cached plans that refer to this table](https://github.com/apache/spark/pull/17097)

When un-caching a table, we should not only remove the cache entry for this table but also un-cache any other cached plans that refer to it. The following commands trigger table un-caching: `DropTableCommand`, `TruncateTableCommand`, `AlterTableRenameCommand`, `UncacheTableCommand`, `RefreshTable` and `InsertIntoHiveTable`

This PR also includes some refactors:
- use `java.util.LinkedList` to store the cache entries, so that it's safer to remove elements while iterating
- rename `invalidateCache` to `recacheByPlan`, which makes it more obvious what it does
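The iteration hazard that the LinkedList change guards against can be sketched in Python (an illustrative analogy only, not the Scala code):

```python
# Removing from a Python list while iterating over it silently skips
# elements -- the same class of hazard the iterator-safe LinkedList
# refactor avoids for the cache-entry list.
cached_entries = ["t1_plan", "t1_dep_plan", "t2_plan"]

# Buggy pattern: mutate while iterating. After "t1_plan" is removed,
# the loop index moves past "t1_dep_plan" without inspecting it.
buggy = list(cached_entries)
for entry in buggy:
    if entry.startswith("t1"):
        buggy.remove(entry)

# Safe pattern: rebuild the list (or use an explicit iterator that
# supports removal, as LinkedList's iterator does on the JVM).
safe = [e for e in cached_entries if not e.startswith("t1")]
```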

### How was this patch tested?
N/A

Author: Xiao Li <gatorsmile@gmail.com>

Closes #17319 from gatorsmile/backport-17097.

19 months ago[SPARK-19329][SQL][BRANCH-2.1] Reading from or writing to a datasource table with...
windpiger [Thu, 16 Mar 2017 17:30:39 +0000 (10:30 -0700)] 
[SPARK-19329][SQL][BRANCH-2.1] Reading from or writing to a datasource table with a non pre-existing location should succeed

## What changes were proposed in this pull request?

This is a backport pr of https://github.com/apache/spark/pull/16672 into branch-2.1.

## How was this patch tested?
Existing tests.

Author: windpiger <songjun@outlook.com>

Closes #17317 from windpiger/backport-insertnotexists.

19 months ago[SPARK-19872] [PYTHON] Use the correct deserializer for RDD construction for coalesce...
hyukjinkwon [Wed, 15 Mar 2017 17:17:18 +0000 (10:17 -0700)] 
[SPARK-19872] [PYTHON] Use the correct deserializer for RDD construction for coalesce/repartition

## What changes were proposed in this pull request?

This PR proposes to use the correct deserializer, `BatchedSerializer`, for RDD construction for coalesce/repartition when the shuffle is enabled. Currently, the copied RDD's `UTF8Deserializer` is passed along as-is instead of `BatchedSerializer`.

with the file, `text.txt` below:

```
a
b

d
e
f
g
h
i
j
k
l

```

- Before

```python
>>> sc.textFile('text.txt').repartition(1).collect()
```

```
UTF8Deserializer(True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../spark/python/pyspark/rdd.py", line 811, in collect
    return list(_load_from_socket(port, self._jrdd_deserializer))
  File ".../spark/python/pyspark/serializers.py", line 549, in load_stream
    yield self.loads(stream)
  File ".../spark/python/pyspark/serializers.py", line 544, in loads
    return s.decode("utf-8") if self.use_unicode else s
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte
```

- After

```python
>>> sc.textFile('text.txt').repartition(1).collect()
```

```
[u'a', u'b', u'', u'd', u'e', u'f', u'g', u'h', u'i', u'j', u'k', u'l', u'']
```

## How was this patch tested?

Unit test in `python/pyspark/tests.py`.

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #17282 from HyukjinKwon/SPARK-19872.

(cherry picked from commit 7387126f83dc0489eb1df734bfeba705709b7861)
Signed-off-by: Davies Liu <davies.liu@gmail.com>
19 months ago[SPARK-19944][SQL] Move SQLConf from sql/core to sql/catalyst (branch-2.1)
Reynold Xin [Wed, 15 Mar 2017 12:07:20 +0000 (13:07 +0100)] 
[SPARK-19944][SQL] Move SQLConf from sql/core to sql/catalyst (branch-2.1)

## What changes were proposed in this pull request?
This patch moves SQLConf from sql/core to sql/catalyst. To minimize the changes, the patch used type alias to still keep CatalystConf (as a type alias) and SimpleCatalystConf (as a concrete class that extends SQLConf).

The motivation for the change is that it is pretty weird to have SQLConf only in sql/core, which forces us to duplicate config options that impact the optimizer/analyzer in sql/catalyst using CatalystConf.

This is a backport into branch-2.1 to minimize merge conflicts.

## How was this patch tested?
N/A

Author: Reynold Xin <rxin@databricks.com>

Closes #17301 from rxin/branch-2.1-conf.

19 months ago[SPARK-19887][SQL] dynamic partition keys can be null or empty string
Wenchen Fan [Wed, 15 Mar 2017 00:24:41 +0000 (08:24 +0800)] 
[SPARK-19887][SQL] dynamic partition keys can be null or empty string

When a dynamic partition value is null or an empty string, we should write the data to a directory like `a=__HIVE_DEFAULT_PARTITION__`; when we read the data back, we should respect this special directory name and treat it as null.

This matches the behavior of Impala; see https://issues.apache.org/jira/browse/IMPALA-252
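A minimal sketch of the directory-name convention (hypothetical helpers; real Spark also escapes special characters in partition values, which is omitted here):

```python
DEFAULT_PARTITION_NAME = "__HIVE_DEFAULT_PARTITION__"

def partition_dir(column, value):
    """Directory name for one dynamic partition value: null and empty
    string both map to the Hive default-partition sentinel."""
    if value is None or value == "":
        return "%s=%s" % (column, DEFAULT_PARTITION_NAME)
    return "%s=%s" % (column, value)

def read_partition_value(dirname):
    """Reading back: the sentinel directory name is treated as null."""
    column, _, raw = dirname.partition("=")
    return column, (None if raw == DEFAULT_PARTITION_NAME else raw)
```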

new regression test

Author: Wenchen Fan <wenchen@databricks.com>

Closes #17277 from cloud-fan/partition.

(cherry picked from commit dacc382f0c918f1ca808228484305ce0e21c705e)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
19 months ago[SPARK-19933][SQL] Do not change output of a subquery
Herman van Hovell [Tue, 14 Mar 2017 17:52:16 +0000 (18:52 +0100)] 
[SPARK-19933][SQL] Do not change output of a subquery

## What changes were proposed in this pull request?
The `RemoveRedundantAlias` rule can change the output attributes (the expression id's to be precise) of a query by eliminating the redundant alias producing them. This is no problem for a regular query, but can cause problems for correlated subqueries: The attributes produced by the subquery are used in the parent plan; changing them will break the parent plan.

This PR fixes this by wrapping a subquery in a `Subquery` top level node when it gets optimized. The `RemoveRedundantAlias` rule now recognizes `Subquery` and makes sure that the output attributes of the `Subquery` node are retained.

## How was this patch tested?
Added a test case to `RemoveRedundantAliasAndProjectSuite` and added a regression test to `SubquerySuite`.

Author: Herman van Hovell <hvanhovell@databricks.com>

Closes #17278 from hvanhovell/SPARK-19933.

(cherry picked from commit e04c05cf41a125b0526f59f9b9e7fdf0b78b8b21)
Signed-off-by: Herman van Hovell <hvanhovell@databricks.com>
19 months ago[SPARK-19853][SS] uppercase kafka topics fail when startingOffsets are SpecificOffsets
uncleGen [Mon, 13 Mar 2017 00:46:31 +0000 (17:46 -0700)] 
[SPARK-19853][SS] uppercase kafka topics fail when startingOffsets are SpecificOffsets

When using the KafkaSource with Structured Streaming, consumer assignments are not what the user expects if startingOffsets is set to an explicit set of topics/partitions in JSON where the topic(s) happen to have uppercase characters. When startingOffsets is parsed, the original string value from the options is lowercased to make matching on "earliest" and "latest" case-insensitive. However, the lowercased JSON is then passed on to SpecificOffsets, so topic names may not be what the user intended by the time assignments are made with the underlying KafkaConsumer.

KafkaSourceProvider.scala:
```
val startingOffsets = caseInsensitiveParams.get(STARTING_OFFSETS_OPTION_KEY).map(_.trim.toLowerCase) match {
    case Some("latest") => LatestOffsets
    case Some("earliest") => EarliestOffsets
    case Some(json) => SpecificOffsets(JsonUtils.partitionOffsets(json))
    case None => LatestOffsets
  }
```
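The fix is to keep matching "earliest"/"latest" case-insensitively while passing the original JSON through untouched; a Python sketch (hypothetical helper, assuming a lowercase option key as the case-insensitive params would provide):

```python
def parse_starting_offsets(params):
    """Match the "earliest"/"latest" sentinels case-insensitively, but
    keep the user's original JSON (and thus topic-name case) intact."""
    raw = params.get("startingoffsets")
    if raw is None:
        return ("latest", None)
    lowered = raw.strip().lower()
    if lowered == "latest":
        return ("latest", None)
    if lowered == "earliest":
        return ("earliest", None)
    # Pass the ORIGINAL string to the JSON offset parser, not the
    # lowercased copy, so uppercase topic names survive.
    return ("specific", raw.strip())
```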

Thanks to cbowden for reporting.

Jenkins

Author: uncleGen <hustyugm@gmail.com>

Closes #17209 from uncleGen/SPARK-19853.

(cherry picked from commit 0a4d06a7c3db9fec2b6f050a631e8b59b0e9376e)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
19 months ago[DOCS][SS] fix structured streaming python example
uncleGen [Sun, 12 Mar 2017 08:29:37 +0000 (08:29 +0000)] 
[DOCS][SS] fix structured streaming python example

## What changes were proposed in this pull request?

- SS python example: `TypeError: 'xxx' object is not callable`
- some other doc issue.

## How was this patch tested?

Jenkins.

Author: uncleGen <hustyugm@gmail.com>

Closes #17257 from uncleGen/docs-ss-python.

(cherry picked from commit e29a74d5b1fa3f9356b7af5dd7e3fce49bc8eb7d)
Signed-off-by: Sean Owen <sowen@cloudera.com>
19 months ago[SPARK-19611][SQL] Introduce configurable table schema inference
Budde [Sat, 11 Mar 2017 00:38:16 +0000 (16:38 -0800)] 
[SPARK-19611][SQL] Introduce configurable table schema inference

Add a new configuration option that allows Spark SQL to infer a case-sensitive schema from a Hive Metastore table's data files when a case-sensitive schema can't be read from the table properties.

- Add spark.sql.hive.caseSensitiveInferenceMode param to SQLConf
- Add schemaPreservesCase field to CatalogTable (set to false when schema can't
  successfully be read from Hive table props)
- Perform schema inference in HiveMetastoreCatalog if schemaPreservesCase is
  false, depending on spark.sql.hive.caseSensitiveInferenceMode
- Add alterTableSchema() method to the ExternalCatalog interface
- Add HiveSchemaInferenceSuite tests
- Refactor and move ParquetFileFormat.mergeMetastoreParquetSchema() to
  HiveMetastoreCatalog.mergeWithMetastoreSchema
- Move schema merging tests from ParquetSchemaSuite to HiveSchemaInferenceSuite
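The case-recovery step can be sketched as matching metastore field names to inferred file-schema names by their lowercase form (a hypothetical simplification of the merge logic):

```python
def merge_with_metastore_schema(metastore_fields, inferred_fields):
    """Sketch: Hive stores field names lowercased, so recover the
    case-sensitive names from the schema inferred from data files by
    matching on the lowercase form. Fields with no inferred match keep
    their metastore name."""
    by_lower = {name.lower(): name for name in inferred_fields}
    return [by_lower.get(name.lower(), name) for name in metastore_fields]
```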

[JIRA for this change](https://issues.apache.org/jira/browse/SPARK-19611)

The tests in ```HiveSchemaInferenceSuite``` should verify that schema inference is working as expected. ```ExternalCatalogSuite``` has also been extended to cover the new ```alterTableSchema()``` API.

Author: Budde <budde@amazon.com>

Closes #17229 from budde/SPARK-19611-2.1.

19 months ago[SPARK-19893][SQL] should not run DataFrame set operations with map type
Wenchen Fan [Sat, 11 Mar 2017 00:14:22 +0000 (16:14 -0800)] 
[SPARK-19893][SQL] should not run DataFrame set operations with map type

In Spark SQL, map type can't be used in equality tests/comparisons, and `Intersect`/`Except`/`Distinct` do need equality tests on all columns, so we should not allow map type in `Intersect`/`Except`/`Distinct`.
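As an analogy only (Python, not Spark's actual mechanism), dicts are unhashable for much the same reason map columns break these operators:

```python
# A Distinct needs to hash/compare whole rows. A map column makes that
# ill-defined in SQL; Python's analogue is that dicts are unhashable,
# so set-based deduplication over rows containing a map fails outright.
rows = [("a", {"k": 1}), ("a", {"k": 1})]
try:
    distinct = set(rows)  # raises: dict inside the tuple is unhashable
except TypeError:
    distinct = None
```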

new regression test

Author: Wenchen Fan <wenchen@databricks.com>

Closes #17236 from cloud-fan/map.

(cherry picked from commit fb9beda54622e0c3190c6504fc468fa4e50eeb45)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
19 months ago[SPARK-19891][SS] Await Batch Lock notified on stream execution exit
Tyson Condie [Fri, 10 Mar 2017 07:02:13 +0000 (23:02 -0800)] 
[SPARK-19891][SS] Await Batch Lock notified on stream execution exit

## What changes were proposed in this pull request?

We need to notify the await batch lock when the stream exits early e.g., when an exception has been thrown.

## How was this patch tested?

Current tests that throw exceptions at runtime will finish faster as a result of this update.

zsxwing

Please review http://spark.apache.org/contributing.html before opening a pull request.

Author: Tyson Condie <tcondie@gmail.com>

Closes #17231 from tcondie/kafka-writer.

(cherry picked from commit 501b7111997bc74754663348967104181b43319b)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
19 months ago[SPARK-19886] Fix reportDataLoss if statement in SS KafkaSource
Burak Yavuz [Fri, 10 Mar 2017 01:42:10 +0000 (17:42 -0800)] 
[SPARK-19886] Fix reportDataLoss if statement in SS KafkaSource

## What changes were proposed in this pull request?

Fix the if statement around the `throw new IllegalStateException` in `reportDataLoss`.

## How was this patch tested?

Regression test

Author: Burak Yavuz <brkyvz@gmail.com>

Closes #17228 from brkyvz/kafka-cause-fix.

(cherry picked from commit 82138e09b9ad8d9609d5c64d6c11244b8f230be7)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
19 months ago[SPARK-19861][SS] watermark should not be a negative time.
uncleGen [Thu, 9 Mar 2017 19:07:31 +0000 (11:07 -0800)] 
[SPARK-19861][SS] watermark should not be a negative time.

## What changes were proposed in this pull request?

`watermark` should not be negative. A negative watermark is invalid, so check it before the query actually runs.

## How was this patch tested?

add new unit test.

Author: uncleGen <hustyugm@gmail.com>
Author: dylon <hustyugm@gmail.com>

Closes #17202 from uncleGen/SPARK-19861.

(cherry picked from commit 30b18e69361746b4d656474374d8b486bb48a19e)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
19 months ago[SPARK-19561][SQL] add int case handling for TimestampType
Jason White [Thu, 9 Mar 2017 18:34:54 +0000 (10:34 -0800)] 
[SPARK-19561][SQL] add int case handling for TimestampType

## What changes were proposed in this pull request?

Add handling of input of type `Int` for dataType `TimestampType` to `EvaluatePython.scala`. Py4J serializes ints smaller than MIN_INT or larger than MAX_INT to Long, which are handled correctly already, but values between MIN_INT and MAX_INT are serialized to Int.

These range limits correspond to roughly half an hour on either side of the epoch. As a result, PySpark doesn't allow TimestampType values to be created in this range.
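The "half an hour" figure follows from the internal representation (`TimestampType.toInternal` produces microseconds since the epoch):

```python
# Values Py4J serializes as a 32-bit Int span MIN_INT..MAX_INT
# microseconds, i.e. roughly +/- 35.8 minutes around the epoch.
MAX_INT = 2 ** 31 - 1
minutes_from_epoch = MAX_INT / 1000000.0 / 60.0
```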

Alternatives attempted: patching the `TimestampType.toInternal` function to cast return values to `long`, so Py4J would always serialize them to Scala Long. Python3 does not have a `long` type, so this approach failed on Python3.

## How was this patch tested?

Added a new PySpark-side test that fails without the change.

The contribution is my original work and I license the work to the project under the project’s open source license.

Resubmission of https://github.com/apache/spark/pull/16896. The original PR didn't go through Jenkins and broke the build. davies dongjoon-hyun

cloud-fan Could you kick off a Jenkins run for me? It passed everything for me locally, but it's possible something has changed in the last few weeks.

Author: Jason White <jason.white@shopify.com>

Closes #17200 from JasonMWhite/SPARK-19561.

(cherry picked from commit 206030bd12405623c00c1ff334663984b9250adb)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
19 months ago[SPARK-19859][SS][FOLLOW-UP] The new watermark should override the old one.
uncleGen [Thu, 9 Mar 2017 07:23:10 +0000 (23:23 -0800)] 
[SPARK-19859][SS][FOLLOW-UP] The new watermark should override the old one.

## What changes were proposed in this pull request?

A follow up to SPARK-19859:

- extract the calculation of `delayMs` and reuse it.
- update EventTimeWatermarkExec
- use the correct `delayMs` in EventTimeWatermark

## How was this patch tested?

Jenkins.

Author: uncleGen <hustyugm@gmail.com>

Closes #17221 from uncleGen/SPARK-19859.

(cherry picked from commit eeb1d6db878641d9eac62d0869a90fe80c1f4461)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
19 months ago[SPARK-19874][BUILD] Hide API docs for org.apache.spark.sql.internal
Shixiong Zhu [Thu, 9 Mar 2017 07:15:52 +0000 (23:15 -0800)] 
[SPARK-19874][BUILD] Hide API docs for org.apache.spark.sql.internal

## What changes were proposed in this pull request?

The API docs should not include the "org.apache.spark.sql.internal" package because they are internal private APIs.

## How was this patch tested?

Jenkins

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #17217 from zsxwing/SPARK-19874.

(cherry picked from commit 029e40b412e332c9f0fff283d604e203066c78c0)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
19 months ago[MINOR][SQL] The analyzer rules are fired twice for cases when AnalysisException...
Dilip Biswal [Thu, 9 Mar 2017 01:33:49 +0000 (17:33 -0800)] 
[MINOR][SQL] The analyzer rules are fired twice for cases when AnalysisException is raised from analyzer.

## What changes were proposed in this pull request?
In general we have a checkAnalysis phase which validates the logical plan and throws AnalysisException on semantic errors. However, we can also throw AnalysisException from a few analyzer rules like ResolveSubquery.

I found that we fire the analyzer rules twice for queries that throw AnalysisException from one of the analyzer rules. This is a very minor fix; we don't strictly have to fix it. I just got confused seeing the rule fired twice when I was not expecting it.

## How was this patch tested?

Tested manually.

Author: Dilip Biswal <dbiswal@us.ibm.com>

Closes #17214 from dilipbiswal/analyis_twice.

(cherry picked from commit d809ceed9762d5bbb04170e45f38751713112dd8)
Signed-off-by: Xiao Li <gatorsmile@gmail.com>
19 months agoRevert "[SPARK-19413][SS] MapGroupsWithState for arbitrary stateful operations for...
Shixiong Zhu [Wed, 8 Mar 2017 22:30:54 +0000 (14:30 -0800)] 
Revert "[SPARK-19413][SS] MapGroupsWithState for arbitrary stateful operations for branch-2.1"

This reverts commit 502c927b8c8a99ef2adf4e6e1d7a6d9232d45ef5.

19 months ago[SPARK-19813] maxFilesPerTrigger combo latestFirst may miss old files in combination...
Burak Yavuz [Wed, 8 Mar 2017 22:35:07 +0000 (14:35 -0800)] 
[SPARK-19813] maxFilesPerTrigger combo latestFirst may miss old files in combination with maxFileAge in FileStreamSource

## What changes were proposed in this pull request?

**The Problem**
There is a file stream source option called maxFileAge which limits how old the files can be, relative to the latest file that has been seen. This is used to limit the files that need to be remembered as "processed". Files older than the latest processed file are ignored. This value is 7 days by default.
This causes a problem when both
latestFirst = true
maxFilesPerTrigger > total files to be processed.
Here is what happens in all combinations
1) latestFirst = false - Since files are processed in order, there won't be any unprocessed file older than the latest processed file. All files will be processed.
2) latestFirst = true AND maxFilesPerTrigger is not set - The maxFileAge thresholding mechanism takes one batch to initialize. If maxFilesPerTrigger is not set, then all old files get processed in the first batch, and so no file is left behind.
3) latestFirst = true AND maxFilesPerTrigger is set to X - The first batch processes the latest X files. That sets the threshold to latest file - maxFileAge, so files older than this threshold will never be considered for processing.
The bug is with case 3.

**The Solution**

Ignore `maxFileAge` when both `maxFilesPerTrigger` and `latestFirst` are set.
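The resulting condition can be sketched as (hypothetical helper name, not the actual FileStreamSource code):

```python
def should_apply_max_file_age(latest_first, max_files_per_trigger):
    """maxFileAge is ignored when both latestFirst and
    maxFilesPerTrigger are set, since old files may still be pending
    for later batches and must not fall below the age threshold."""
    return not (latest_first and max_files_per_trigger is not None)
```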

## How was this patch tested?

Regression test in `FileStreamSourceSuite`

Author: Burak Yavuz <brkyvz@gmail.com>

Closes #17153 from brkyvz/maxFileAge.

(cherry picked from commit a3648b5d4f99ff9461d02f53e9ec71787a3abf51)
Signed-off-by: Burak Yavuz <brkyvz@gmail.com>
19 months ago[SPARK-18055][SQL] Use correct mirror in ExpressionEncoder
Michael Armbrust [Wed, 8 Mar 2017 09:32:42 +0000 (01:32 -0800)] 
[SPARK-18055][SQL] Use correct mirror in ExpressionEncoder

Previously, we were using the mirror of the passed-in `TypeTag` when reflecting to build an encoder.  This fails when the outer class is built-in (i.e. `Seq`'s default mirror is based on the root classloader) but inner classes (i.e. `A` in `Seq[A]`) are defined in the REPL or a library.

This patch changes us to always reflect based on a mirror created using the context classloader.

Author: Michael Armbrust <michael@databricks.com>

Closes #17201 from marmbrus/replSeqEncoder.

(cherry picked from commit 314e48a3584bad4b486b046bbf0159d64ba857bc)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
19 months ago[SPARK-19348][PYTHON] PySpark keyword_only decorator is not thread-safe
Bryan Cutler [Wed, 8 Mar 2017 04:44:30 +0000 (20:44 -0800)] 
[SPARK-19348][PYTHON] PySpark keyword_only decorator is not thread-safe

## What changes were proposed in this pull request?
The `keyword_only` decorator in PySpark is not thread-safe.  It writes kwargs to a static class variable in the decorator, which is then retrieved later in the class method as `_input_kwargs`.  If multiple threads are constructing the same class with different kwargs, it becomes a race condition to read from the static class variable before it's overwritten.  See [SPARK-19348](https://issues.apache.org/jira/browse/SPARK-19348) for reproduction code.

This change will write the kwargs to a member variable so that multiple threads can operate on separate instances without the race condition.  It does not protect against multiple threads operating on a single instance, but that is better left to the user to synchronize.
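A sketch of the thread-safe pattern (a hypothetical simplification, not PySpark's exact decorator):

```python
import functools

def keyword_only(func):
    """Store the kwargs on the instance (self._input_kwargs) instead of
    on a shared class attribute, so concurrent constructions of the
    same class don't race on shared state."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if args:
            raise TypeError(
                "Method %s only takes keyword arguments." % func.__name__)
        self._input_kwargs = kwargs  # per-instance, no shared state
        return func(self, **kwargs)
    return wrapper

class Estimator(object):
    @keyword_only
    def __init__(self, maxIter=10):
        # Later code reads the kwargs back from the instance.
        self.maxIter = self._input_kwargs.get("maxIter", 10)
```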

## How was this patch tested?
Added new unit tests for using the keyword_only decorator and a regression test that verifies `_input_kwargs` can be overwritten from different class instances.

Author: Bryan Cutler <cutlerb@gmail.com>

Closes #17193 from BryanCutler/pyspark-keyword_only-threadsafe-SPARK-19348-2_1.

19 months ago[SPARK-19859][SS] The new watermark should override the old one
Shixiong Zhu [Wed, 8 Mar 2017 04:34:55 +0000 (20:34 -0800)] 
[SPARK-19859][SS] The new watermark should override the old one

## What changes were proposed in this pull request?

The new watermark should override the old one. Otherwise, we just pick up the first column that has a watermark, which may be unexpected.

## How was this patch tested?

The new test.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes #17199 from zsxwing/SPARK-19859.

(cherry picked from commit d8830c5039d9c7c5ef03631904c32873ab558e22)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
19 months agoRevert "[SPARK-19561] [PYTHON] cast TimestampType.toInternal output to long"
Wenchen Fan [Wed, 8 Mar 2017 01:15:39 +0000 (17:15 -0800)] 
Revert "[SPARK-19561] [PYTHON] cast TimestampType.toInternal output to long"

This reverts commit 6f4684622a951806bebe7652a14f7d1ce03e24c7.

19 months ago[SPARK-19857][YARN] Correctly calculate next credential update time.
Marcelo Vanzin [Wed, 8 Mar 2017 00:21:18 +0000 (16:21 -0800)] 
[SPARK-19857][YARN] Correctly calculate next credential update time.

Add parentheses so that both lines form a single statement; also add
a log message so that the issue becomes more explicit if it shows up
again.

Tested manually with integration test that exercises the feature.

Author: Marcelo Vanzin <vanzin@cloudera.com>

Closes #17198 from vanzin/SPARK-19857.

(cherry picked from commit 8e41c2eed873e215b13215844ba5ba73a8906c5b)
Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>