@@ -17,6 +17,7 @@

package org.apache.spark.sql.types

import java.lang.{Long => JLong}
import java.math.{BigInteger, MathContext, RoundingMode}

import org.apache.spark.annotation.DeveloperApi
@@ -132,17 +133,15 @@ final class Decimal extends Ordered[Decimal] with Serializable {
* Set this Decimal to the given BigInteger value. Will have precision 38 and scale 0.
*/
def set(bigintval: BigInteger): Decimal = {
-    try {
-      this.decimalVal = null
-      this.longVal = bigintval.longValueExact()
-      this._precision = DecimalType.MAX_PRECISION
-      this._scale = 0
-      this
-    }
-    catch {
-      case e: ArithmeticException =>
-        throw new IllegalArgumentException(s"BigInteger ${bigintval} too large for decimal")
-    }
+    // TODO: Remove this once we migrate to java8 and use longValueExact() instead.
+    require(
+      bigintval.compareTo(LONG_MAX_BIG_INT) <= 0 && bigintval.compareTo(LONG_MIN_BIG_INT) >= 0,
+      s"BigInteger $bigintval too large for decimal")
+    this.decimalVal = null
+    this.longVal = bigintval.longValue()
+    this._precision = DecimalType.MAX_PRECISION
+    this._scale = 0
+    this

Contributor: why doing this as a require instead of try catch?

Member: Now the error is not detected by catching an exception but by manually checking the value. Require is the simplest way to get the desired result, which is an IAE.

Contributor: Doesn't this have a potential perf hit? Also, is this actually related to the problem this ticket is solving?

Member: The problem is we can't call a method that will throw an exception if the value is out of bounds during conversion. We just check it manually. I actually expect this is faster, but the difference is vanishingly small. I don't think there is a choice if Java 7 is being supported.

Contributor: Ah ok. Then, well, we have to do this.
}
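The trade-off discussed in the thread above (Java 8's `longValueExact()`, which throws `ArithmeticException` on overflow, versus a manual range check that also compiles on Java 7) can be sketched in plain Java. The class and method names here are hypothetical; the two bound constants mirror `LONG_MAX_BIG_INT`/`LONG_MIN_BIG_INT` from the patch:

```java
import java.math.BigInteger;

public class BoundsCheck {
    // The long range expressed as BigIntegers, mirroring the constants the patch adds.
    static final BigInteger LONG_MAX_BIG_INT = BigInteger.valueOf(Long.MAX_VALUE);
    static final BigInteger LONG_MIN_BIG_INT = BigInteger.valueOf(Long.MIN_VALUE);

    // Java 7 compatible: reject out-of-range values up front, then use the
    // (otherwise lossy) longValue() conversion, which is safe once in range.
    static long toLongChecked(BigInteger v) {
        if (v.compareTo(LONG_MAX_BIG_INT) > 0 || v.compareTo(LONG_MIN_BIG_INT) < 0) {
            throw new IllegalArgumentException("BigInteger " + v + " too large for decimal");
        }
        return v.longValue();
    }

    public static void main(String[] args) {
        System.out.println(toLongChecked(BigInteger.valueOf(42)));   // in range: prints 42
        try {
            // One past Long.MAX_VALUE: rejected before any lossy conversion.
            toLongChecked(LONG_MAX_BIG_INT.add(BigInteger.ONE));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

This also explains why `require` gives "the desired result which is an IAE": Scala's `Predef.require` throws `IllegalArgumentException` when its condition is false, matching what the old catch-and-rethrow produced.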

/**
@@ -174,7 +173,7 @@ final class Decimal extends Ordered[Decimal] with Serializable {

def toScalaBigInt: BigInt = BigInt(toLong)

-  def toJavaBigInteger: java.math.BigInteger = java.math.BigInteger.valueOf(toLong)
+  def toJavaBigInteger: BigInteger = BigInteger.valueOf(toLong)

Member: Although I support removing the redundant qualified name, I wonder if we should also un-qualify scala.math.BigInt, or else leave both qualified even though they're not ambiguous. BigDecimal is used unqualified while the Java version is always qualified, to disambiguate. I guess it would become consistent if BigInt were not qualified.


def toUnscaledLong: Long = {
if (decimalVal.ne(null)) {
@@ -382,6 +381,9 @@ object Decimal {
private[sql] val ZERO = Decimal(0)
private[sql] val ONE = Decimal(1)

+  private[types] val LONG_MAX_BIG_INT = BigInteger.valueOf(JLong.MAX_VALUE)
+  private[types] val LONG_MIN_BIG_INT = BigInteger.valueOf(JLong.MIN_VALUE)

Contributor: this can just be private

def apply(value: Double): Decimal = new Decimal().set(value)

def apply(value: Long): Decimal = new Decimal().set(value)
@@ -392,9 +394,9 @@

def apply(value: java.math.BigDecimal): Decimal = new Decimal().set(value)

-  def apply(value: java.math.BigInteger): Decimal = new Decimal().set(value)
+  def apply(value: BigInteger): Decimal = new Decimal().set(value)

-  def apply(value: scala.math.BigInt): Decimal = new Decimal().set(value.bigInteger)
+  def apply(value: BigInt): Decimal = new Decimal().set(value.bigInteger)

def apply(value: BigDecimal, precision: Int, scale: Int): Decimal =
new Decimal().set(value, precision, scale)
@@ -412,8 +414,8 @@
value match {
case j: java.math.BigDecimal => apply(j)
case d: BigDecimal => apply(d)
-      case k: scala.math.BigInt => apply(k)
-      case l: java.math.BigInteger => apply(l)
+      case k: BigInt => apply(k)
+      case l: BigInteger => apply(l)

Contributor: this is too confusing. please keep the qualifiers. i'd actually keep qualifiers everywhere.

Member: The code is inconsistent in this regard though. I thought it worth cleaning up but don't feel strongly about it. Alternative is to qualify both classes everywhere consistently.

Contributor: I agree it's better to make it consistent, but let's not mix it into a simple fix for Java 7 ...
case d: Decimal => d
}
}