103 commits
c32fad0  spelling: actual  (jsoref, Nov 11, 2020)
7a041bd  spelling: address  (jsoref, Nov 11, 2020)
4ee2f69  spelling: against  (jsoref, Nov 11, 2020)
5eeb636  spelling: algorithms  (jsoref, Nov 11, 2020)
5160398  spelling: alternative  (jsoref, Nov 11, 2020)
4af7ec0  spelling: avoid  (jsoref, Nov 11, 2020)
1097861  spelling: cannot  (jsoref, Nov 11, 2020)
ab19ac2  spelling: centers  (jsoref, Nov 11, 2020)
326b346  spelling: checkpointof  (jsoref, Nov 11, 2020)
d80f112  spelling: claim  (jsoref, Nov 11, 2020)
b331088  spelling: cloudpickle  (jsoref, Nov 11, 2020)
6e38abb  spelling: cloudpickler  (jsoref, Nov 12, 2020)
d30b24e  spelling: column  (jsoref, Nov 11, 2020)
bc798f6  spelling: combination  (jsoref, Nov 11, 2020)
b523262  spelling: combinations  (jsoref, Nov 11, 2020)
f2d8d7d  spelling: compatibility  (jsoref, Nov 11, 2020)
fae65af  spelling: compilation  (jsoref, Nov 11, 2020)
5b1406f  spelling: component  (jsoref, Nov 11, 2020)
afbe1e6  spelling: components  (jsoref, Nov 11, 2020)
779cc9f  spelling: compress  (jsoref, Nov 11, 2020)
072ee1b  spelling: concatenates  (jsoref, Nov 11, 2020)
6882dca  spelling: concatenating  (jsoref, Nov 11, 2020)
1382dce  spelling: confidence  (jsoref, Nov 11, 2020)
ed8d7aa  spelling: configurations  (jsoref, Nov 11, 2020)
5c4c91d  spelling: conjunction  (jsoref, Nov 11, 2020)
b0774f3  spelling: connection  (jsoref, Nov 11, 2020)
d5025b5  spelling: contains  (jsoref, Nov 11, 2020)
73a7da5  spelling: converting  (jsoref, Nov 11, 2020)
a84552e  spelling: corresponding  (jsoref, Nov 11, 2020)
5595a09  spelling: crypto  (jsoref, Nov 11, 2020)
651fe9c  spelling: datasource  (jsoref, Nov 11, 2020)
31b8e14  spelling: dependencies  (jsoref, Nov 11, 2020)
3d04958  spelling: described  (jsoref, Nov 11, 2020)
57114f1  spelling: directory  (jsoref, Nov 11, 2020)
379316d  spelling: dispatch  (jsoref, Nov 11, 2020)
bd02515  spelling: do not  (jsoref, Nov 11, 2020)
68b7756  spelling: does not  (jsoref, Nov 11, 2020)
62c3f3a  spelling: don't  (jsoref, Nov 11, 2020)
335daf3  spelling: dynamic  (jsoref, Nov 11, 2020)
c9f83c2  spelling: e.g.  (jsoref, Nov 11, 2020)
dcc899d  spelling: eagerly  (jsoref, Nov 11, 2020)
23ca389  spelling: environment  (jsoref, Nov 11, 2020)
c49101b  spelling: exclusion  (jsoref, Nov 11, 2020)
0c4228e  spelling: external  (jsoref, Nov 11, 2020)
cdec024  spelling: github  (jsoref, Nov 11, 2020)
12268ee  spelling: groupby  (jsoref, Nov 11, 2020)
993d10d  spelling: grouped  (jsoref, Nov 11, 2020)
457b52c  spelling: grouping  (jsoref, Nov 11, 2020)
ec0fa84  spelling: i.e.  (jsoref, Nov 11, 2020)
a962c93  spelling: impurity  (jsoref, Nov 11, 2020)
8e27671  spelling: initialized  (jsoref, Nov 11, 2020)
c08cf69  spelling: insertion  (jsoref, Nov 11, 2020)
a5e5a9e  spelling: jarray  (jsoref, Nov 11, 2020)
c87eceb  spelling: large  (jsoref, Nov 11, 2020)
f705995  spelling: literal  (jsoref, Nov 11, 2020)
0cb4192  spelling: managed  (jsoref, Nov 11, 2020)
05d9690  spelling: millis  (jsoref, Nov 11, 2020)
4034350  spelling: natural  (jsoref, Nov 11, 2020)
d55899c  spelling: non deterministic  (jsoref, Nov 11, 2020)
3099ce0  spelling: not  (jsoref, Nov 11, 2020)
3aedf4e  spelling: nullable  (jsoref, Nov 11, 2020)
611c939  spelling: numbers  (jsoref, Nov 11, 2020)
ee82aff  spelling: occurred  (jsoref, Nov 11, 2020)
721a2f0  spelling: optimize  (jsoref, Nov 11, 2020)
257a6e0  spelling: panel  (jsoref, Nov 11, 2020)
9b86884  spelling: parallelism  (jsoref, Nov 11, 2020)
6c12d52  spelling: parallelize  (jsoref, Nov 11, 2020)
bb7f8ee  spelling: parameter  (jsoref, Nov 11, 2020)
ee02a85  spelling: partitioner  (jsoref, Nov 11, 2020)
c071341  spelling: persistent  (jsoref, Nov 11, 2020)
6ab889a  spelling: position  (jsoref, Nov 11, 2020)
59e50b1  spelling: preemption  (jsoref, Nov 11, 2020)
a0ef0ca  spelling: preferred  (jsoref, Nov 11, 2020)
447bc29  spelling: progress  (jsoref, Nov 11, 2020)
060d36c  spelling: pycharm  (jsoref, Nov 11, 2020)
6b0dd91  spelling: randomly  (jsoref, Nov 11, 2020)
0c713fc  spelling: reconstruct  (jsoref, Nov 11, 2020)
cbbd9fb  spelling: repository  (jsoref, Nov 11, 2020)
e8bd1ac  spelling: reuses  (jsoref, Nov 11, 2020)
ae4ecd0  spelling: search  (jsoref, Nov 11, 2020)
6e4237d  spelling: selector  (jsoref, Nov 11, 2020)
3910637  spelling: sequential  (jsoref, Nov 11, 2020)
92e6efa  spelling: spark  (jsoref, Nov 11, 2020)
f2150fe  spelling: specified  (jsoref, Nov 11, 2020)
5b1d6af  spelling: state  (jsoref, Nov 11, 2020)
fc076be  spelling: stream  (jsoref, Nov 11, 2020)
366209e  spelling: struct  (jsoref, Nov 11, 2020)
dd21a5e  spelling: subclassing  (jsoref, Nov 11, 2020)
a56a2e1  spelling: subscriber  (jsoref, Nov 11, 2020)
c3acb51  spelling: succeeded  (jsoref, Nov 11, 2020)
a874671  spelling: suppress  (jsoref, Nov 11, 2020)
b2a8e61  spelling: temporary  (jsoref, Nov 11, 2020)
77841da  spelling: the  (jsoref, Nov 11, 2020)
7db5761  spelling: tracked  (jsoref, Nov 11, 2020)
e829cbe  spelling: transferred  (jsoref, Nov 11, 2020)
710b579  spelling: unencrypted  (jsoref, Nov 11, 2020)
3aed244  spelling: unsigned  (jsoref, Nov 11, 2020)
e0886bc  spelling: uploaded  (jsoref, Nov 11, 2020)
581c627  spelling: uploading  (jsoref, Nov 11, 2020)
4041ea7  spelling: visited  (jsoref, Nov 11, 2020)
69349c7  spelling: warning  (jsoref, Nov 11, 2020)
c5a0478  spelling: without  (jsoref, Nov 11, 2020)
070e6bb  spelling: written  (jsoref, Nov 11, 2020)
2 changes: 1 addition & 1 deletion R/CRAN_RELEASE.md
@@ -25,7 +25,7 @@ To release SparkR as a package to CRAN, we would use the `devtools` package. Ple

First, check that the `Version:` field in the `pkg/DESCRIPTION` file is updated. Also, check for stale files not under source control.

Note that while `run-tests.sh` runs `check-cran.sh` (which runs `R CMD check`), it is doing so with `--no-manual --no-vignettes`, which skips a few vignettes or PDF checks - therefore it will be preferred to run `R CMD check` on the source package built manually before uploading a release. Also note that for CRAN checks for pdf vignettes to success, `qpdf` tool must be there (to install it, eg. `yum -q -y install qpdf`).
Note that while `run-tests.sh` runs `check-cran.sh` (which runs `R CMD check`), it is doing so with `--no-manual --no-vignettes`, which skips a few vignettes or PDF checks - therefore it will be preferred to run `R CMD check` on the source package built manually before uploading a release. Also note that for CRAN checks for pdf vignettes to success, `qpdf` tool must be there (to install it, e.g. `yum -q -y install qpdf`).

To upload a release, we would need to update the `cran-comments.md`. This should generally contain the results from running the `check-cran.sh` script along with comments on status of all `WARNING` (should not be any) or `NOTE`. As a part of `check-cran.sh` and the release process, the vignettes is build - make sure `SPARK_HOME` is set and Spark jars are accessible.

2 changes: 1 addition & 1 deletion R/install-dev.bat
@@ -26,7 +26,7 @@ MKDIR %SPARK_HOME%\R\lib

rem When you pass the package path directly as an argument to R CMD INSTALL,
rem it takes the path as 'C:\projects\spark\R\..\R\pkg"' as an example at
rem R 4.0. To work around this, directly go to the directoy and install it.
rem R 4.0. To work around this, directly go to the directory and install it.
rem See also SPARK-32074
pushd %SPARK_HOME%\R\pkg\
R.exe CMD INSTALL --library="%SPARK_HOME%\R\lib" .
6 changes: 3 additions & 3 deletions R/pkg/R/DataFrame.R
@@ -2772,7 +2772,7 @@ setMethod("merge",
#' Creates a list of columns by replacing the intersected ones with aliases
#'
#' Creates a list of columns by replacing the intersected ones with aliases.
#' The name of the alias column is formed by concatanating the original column name and a suffix.
#' The name of the alias column is formed by concatenating the original column name and a suffix.
#'
#' @param x a SparkDataFrame
#' @param intersectedColNames a list of intersected column names of the SparkDataFrame
@@ -3231,7 +3231,7 @@ setMethod("describe",
#' \item stddev
#' \item min
#' \item max
#' \item arbitrary approximate percentiles specified as a percentage (eg, "75\%")
#' \item arbitrary approximate percentiles specified as a percentage (e.g., "75\%")
#' }
#' If no statistics are given, this function computes count, mean, stddev, min,
#' approximate quartiles (percentiles at 25\%, 50\%, and 75\%), and max.
@@ -3743,7 +3743,7 @@ setMethod("histogram",
#'
#' @param x a SparkDataFrame.
#' @param url JDBC database url of the form \code{jdbc:subprotocol:subname}.
#' @param tableName yhe name of the table in the external database.
#' @param tableName the name of the table in the external database.
#' @param mode one of 'append', 'overwrite', 'error', 'errorifexists', 'ignore'
#' save mode (it is 'error' by default)
#' @param ... additional JDBC database connection properties.
4 changes: 2 additions & 2 deletions R/pkg/R/RDD.R
@@ -970,7 +970,7 @@ setMethod("takeSample", signature(x = "RDD", withReplacement = "logical",
MAXINT)))))
# If the first sample didn't turn out large enough, keep trying to
# take samples; this shouldn't happen often because we use a big
# multiplier for thei initial size
# multiplier for the initial size
while (length(samples) < total)
samples <- collectRDD(sampleRDD(x, withReplacement, fraction,
as.integer(ceiling(stats::runif(1,
@@ -1512,7 +1512,7 @@ setMethod("glom",
#'
#' @param x An RDD.
#' @param y An RDD.
#' @return a new RDD created by performing the simple union (witout removing
#' @return a new RDD created by performing the simple union (without removing
#' duplicates) of two input RDDs.
#' @examples
#'\dontrun{
2 changes: 1 addition & 1 deletion R/pkg/R/SQLContext.R
@@ -203,7 +203,7 @@ getSchema <- function(schema, firstRow = NULL, rdd = NULL) {
})
}

# SPAKR-SQL does not support '.' in column name, so replace it with '_'
# SPARK-SQL does not support '.' in column name, so replace it with '_'
# TODO(davies): remove this once SPARK-2775 is fixed
names <- lapply(names, function(n) {
nn <- gsub(".", "_", n, fixed = TRUE)
4 changes: 2 additions & 2 deletions R/pkg/R/WindowSpec.R
@@ -54,7 +54,7 @@ setMethod("show", "WindowSpec",
#' Defines the partitioning columns in a WindowSpec.
#'
#' @param x a WindowSpec.
#' @param col a column to partition on (desribed by the name or Column).
#' @param col a column to partition on (described by the name or Column).
#' @param ... additional column(s) to partition on.
#' @return A WindowSpec.
#' @rdname partitionBy
@@ -231,7 +231,7 @@ setMethod("rangeBetween",
#' @rdname over
#' @name over
#' @aliases over,Column,WindowSpec-method
#' @family colum_func
#' @family column_func
#' @examples
#' \dontrun{
#' df <- createDataFrame(mtcars)
16 changes: 8 additions & 8 deletions R/pkg/R/column.R
@@ -135,7 +135,7 @@ createMethods()
#' @rdname alias
#' @name alias
#' @aliases alias,Column-method
#' @family colum_func
#' @family column_func
#' @examples
#' \dontrun{
#' df <- createDataFrame(iris)
@@ -161,7 +161,7 @@ setMethod("alias",
#'
#' @rdname substr
#' @name substr
#' @family colum_func
#' @family column_func
#' @aliases substr,Column-method
#'
#' @param x a Column.
@@ -187,7 +187,7 @@ setMethod("substr", signature(x = "Column"),
#'
#' @rdname startsWith
#' @name startsWith
#' @family colum_func
#' @family column_func
#' @aliases startsWith,Column-method
#'
#' @param x vector of character string whose "starts" are considered
@@ -206,7 +206,7 @@ setMethod("startsWith", signature(x = "Column"),
#'
#' @rdname endsWith
#' @name endsWith
#' @family colum_func
#' @family column_func
#' @aliases endsWith,Column-method
#'
#' @param x vector of character string whose "ends" are considered
@@ -224,7 +224,7 @@ setMethod("endsWith", signature(x = "Column"),
#'
#' @rdname between
#' @name between
#' @family colum_func
#' @family column_func
#' @aliases between,Column-method
#'
#' @param x a Column
@@ -251,7 +251,7 @@ setMethod("between", signature(x = "Column"),
# nolint end
#' @rdname cast
#' @name cast
#' @family colum_func
#' @family column_func
#' @aliases cast,Column-method
#'
#' @examples
@@ -300,7 +300,7 @@ setMethod("%in%",
#' Can be a single value or a Column.
#' @rdname otherwise
#' @name otherwise
#' @family colum_func
#' @family column_func
#' @aliases otherwise,Column-method
#' @note otherwise since 1.5.0
setMethod("otherwise",
@@ -440,7 +440,7 @@ setMethod("withField",
#' )
#'
#' # However, if you are going to add/replace multiple nested fields,
#' # it is preffered to extract out the nested struct before
#' # it is preferred to extract out the nested struct before
#' # adding/replacing multiple fields e.g.
#' head(
#' withColumn(
4 changes: 2 additions & 2 deletions R/pkg/R/context.R
@@ -86,7 +86,7 @@ makeSplits <- function(numSerializedSlices, length) {
# For instance, for numSerializedSlices of 22, length of 50
# [1] 0 0 2 2 4 4 6 6 6 9 9 11 11 13 13 15 15 15 18 18 20 20 22 22 22
# [26] 25 25 27 27 29 29 31 31 31 34 34 36 36 38 38 40 40 40 43 43 45 45 47 47 47
# Notice the slice group with 3 slices (ie. 6, 15, 22) are roughly evenly spaced.
# Notice the slice group with 3 slices (i.e. 6, 15, 22) are roughly evenly spaced.
# We are trying to reimplement the calculation in the positions method in ParallelCollectionRDD
if (numSerializedSlices > 0) {
unlist(lapply(0: (numSerializedSlices - 1), function(x) {
@@ -116,7 +116,7 @@ makeSplits <- function(numSerializedSlices, length) {
#' This change affects both createDataFrame and spark.lapply.
#' In the specific one case that it is used to convert R native object into SparkDataFrame, it has
#' always been kept at the default of 1. In the case the object is large, we are explicitly setting
#' the parallism to numSlices (which is still 1).
#' the parallelism to numSlices (which is still 1).
#'
#' Specifically, we are changing to split positions to match the calculation in positions() of
#' ParallelCollectionRDD in Spark.
2 changes: 1 addition & 1 deletion R/pkg/R/deserialize.R
@@ -250,7 +250,7 @@ readDeserializeWithKeysInArrow <- function(inputCon) {

keys <- readMultipleObjects(inputCon)

# Read keys to map with each groupped batch later.
# Read keys to map with each grouped batch later.
list(keys = keys, data = data)
}

4 changes: 2 additions & 2 deletions R/pkg/R/functions.R
@@ -144,7 +144,7 @@ NULL
#' @param y Column to compute on.
#' @param pos In \itemize{
#' \item \code{locate}: a start position of search.
#' \item \code{overlay}: a start postiton for replacement.
#' \item \code{overlay}: a start position for replacement.
#' }
#' @param len In \itemize{
#' \item \code{lpad} the maximum length of each output result.
@@ -2879,7 +2879,7 @@ setMethod("shiftRight", signature(y = "Column", x = "numeric"),
})

#' @details
#' \code{shiftRightUnsigned}: (Unigned) shifts the given value numBits right. If the given value is
#' \code{shiftRightUnsigned}: (Unsigned) shifts the given value numBits right. If the given value is
#' a long value, it will return a long value else it will return an integer value.
#'
#' @rdname column_math_functions
2 changes: 1 addition & 1 deletion R/pkg/R/install.R
@@ -289,7 +289,7 @@ sparkCachePath <- function() {
}

# Length of the Spark cache specific relative path segments for each platform
# eg. "Apache\Spark\Cache" is 3 in Windows, or "spark" is 1 in unix
# e.g. "Apache\Spark\Cache" is 3 in Windows, or "spark" is 1 in unix
# Must match sparkCachePath() exactly.
sparkCacheRelPathLength <- function() {
if (is_windows()) {
2 changes: 1 addition & 1 deletion R/pkg/R/mllib_fpm.R
@@ -125,7 +125,7 @@ setMethod("spark.freqItemsets", signature(object = "FPGrowthModel"),
#' The \code{SparkDataFrame} contains five columns:
#' \code{antecedent} (an array of the same type as the input column),
#' \code{consequent} (an array of the same type as the input column),
#' \code{condfidence} (confidence for the rule)
#' \code{confidence} (confidence for the rule)
#' \code{lift} (lift for the rule)
#' and \code{support} (support for the rule)
#' @rdname spark.fpGrowth
4 changes: 2 additions & 2 deletions R/pkg/R/mllib_tree.R
@@ -53,7 +53,7 @@ setClass("DecisionTreeRegressionModel", representation(jobj = "jobj"))
#' @note DecisionTreeClassificationModel since 2.3.0
setClass("DecisionTreeClassificationModel", representation(jobj = "jobj"))

# Create the summary of a tree ensemble model (eg. Random Forest, GBT)
# Create the summary of a tree ensemble model (e.g. Random Forest, GBT)
summary.treeEnsemble <- function(model) {
jobj <- model@jobj
formula <- callJMethod(jobj, "formula")
@@ -73,7 +73,7 @@ summary.treeEnsemble <- function(model) {
jobj = jobj)
}

# Prints the summary of tree ensemble models (eg. Random Forest, GBT)
# Prints the summary of tree ensemble models (e.g. Random Forest, GBT)
print.summary.treeEnsemble <- function(x) {
jobj <- x$jobj
cat("Formula: ", x$formula)
2 changes: 1 addition & 1 deletion R/pkg/R/mllib_utils.R
@@ -18,7 +18,7 @@
# mllib_utils.R: Utilities for MLlib integration

# Integration with R's standard functions.
# Most of MLlib's argorithms are provided in two flavours:
# Most of MLlib's algorithms are provided in two flavours:
# - a specialization of the default R methods (glm). These methods try to respect
# the inputs and the outputs of R's method to the largest extent, but some small differences
# may exist.
4 changes: 2 additions & 2 deletions R/pkg/R/pairRDD.R
@@ -239,7 +239,7 @@ setMethod("partitionByRDD",
javaPairRDD <- callJMethod(javaPairRDD, "partitionBy", rPartitioner)

# Call .values() on the result to get back the final result, the
# shuffled acutal content key-val pairs.
# shuffled actual content key-val pairs.
r <- callJMethod(javaPairRDD, "values")

RDD(r, serializedMode = "byte")
@@ -411,7 +411,7 @@ setMethod("reduceByKeyLocally",
#' \itemize{
#' \item createCombiner, which turns a V into a C (e.g., creates a one-element list)
#' \item mergeValue, to merge a V into a C (e.g., adds it to the end of a list) -
#' \item mergeCombiners, to combine two C's into a single one (e.g., concatentates
#' \item mergeCombiners, to combine two C's into a single one (e.g., concatenates
#' two lists).
#' }
#'
2 changes: 1 addition & 1 deletion R/pkg/R/streaming.R
@@ -93,7 +93,7 @@ setMethod("explain",

#' lastProgress
#'
#' Prints the most recent progess update of this streaming query in JSON format.
#' Prints the most recent progress update of this streaming query in JSON format.
#'
#' @param x a StreamingQuery.
#' @rdname lastProgress
2 changes: 1 addition & 1 deletion R/pkg/R/types.R
@@ -68,7 +68,7 @@ rToSQLTypes <- as.environment(list(
"character" = "string",
"logical" = "boolean"))

# Helper function of coverting decimal type. When backend returns column type in the
# Helper function of converting decimal type. When backend returns column type in the
# format of decimal(,) (e.g., decimal(10, 0)), this function coverts the column type
# as double type. This function converts backend returned types that are not the key
# of PRIMITIVE_TYPES, but should be treated as PRIMITIVE_TYPES.
2 changes: 1 addition & 1 deletion R/pkg/R/utils.R
@@ -930,7 +930,7 @@ getOne <- function(x, envir, inherits = TRUE, ifnotfound = NULL) {
}

# Returns a vector of parent directories, traversing up count times, starting with a full path
# eg. traverseParentDirs("/Users/user/Library/Caches/spark/spark2.2", 1) should return
# e.g. traverseParentDirs("/Users/user/Library/Caches/spark/spark2.2", 1) should return
# this "/Users/user/Library/Caches/spark/spark2.2"
# and "/Users/user/Library/Caches/spark"
traverseParentDirs <- function(x, count) {
4 changes: 2 additions & 2 deletions R/pkg/inst/worker/daemon.R
@@ -32,7 +32,7 @@ inputCon <- socketConnection(

SparkR:::doServerAuth(inputCon, Sys.getenv("SPARKR_WORKER_SECRET"))

# Waits indefinitely for a socket connecion by default.
# Waits indefinitely for a socket connection by default.
selectTimeout <- NULL

while (TRUE) {
@@ -72,7 +72,7 @@ while (TRUE) {
}
})
} else if (is.null(children)) {
# If it is NULL, there are no children. Waits indefinitely for a socket connecion.
# If it is NULL, there are no children. Waits indefinitely for a socket connection.
selectTimeout <- NULL
}

8 changes: 4 additions & 4 deletions R/pkg/inst/worker/worker.R
@@ -85,7 +85,7 @@ outputResult <- function(serializer, output, outputCon) {
}

# Constants
specialLengths <- list(END_OF_STERAM = 0L, TIMING_DATA = -1L)
specialLengths <- list(END_OF_STREAM = 0L, TIMING_DATA = -1L)

# Timing R process boot
bootTime <- currentTimeSecs()
@@ -180,7 +180,7 @@ if (isEmpty != 0) {
} else if (deserializer == "arrow" && mode == 1) {
data <- SparkR:::readDeserializeInArrow(inputCon)
# See https://stat.ethz.ch/pipermail/r-help/2010-September/252046.html
# rbind.fill might be an anternative to make it faster if plyr is installed.
# rbind.fill might be an alternative to make it faster if plyr is installed.
# Also, note that, 'dapply' applies a function to each partition.
data <- do.call("rbind", data)
}
@@ -212,7 +212,7 @@

if (serializer == "arrow") {
# See https://stat.ethz.ch/pipermail/r-help/2010-September/252046.html
# rbind.fill might be an anternative to make it faster if plyr is installed.
# rbind.fill might be an alternative to make it faster if plyr is installed.
combined <- do.call("rbind", outputs)
SparkR:::writeSerializeInArrow(outputCon, combined)
}
@@ -285,7 +285,7 @@ SparkR:::writeDouble(outputCon, computeInputElapsDiff) # compute
SparkR:::writeDouble(outputCon, outputComputeElapsDiff) # output

# End of output
SparkR:::writeInt(outputCon, specialLengths$END_OF_STERAM)
SparkR:::writeInt(outputCon, specialLengths$END_OF_STREAM)

close(outputCon)
close(inputCon)
2 changes: 1 addition & 1 deletion R/pkg/tests/fulltests/test_Serde.R
@@ -125,7 +125,7 @@ test_that("SerDe of list of lists", {

sparkR.session.stop()

# Note that this test should be at the end of tests since the configruations used here are not
# Note that this test should be at the end of tests since the configurations used here are not
# specific to sessions, and the Spark context is restarted.
test_that("createDataFrame large objects", {
for (encryptionEnabled in list("true", "false")) {
6 changes: 3 additions & 3 deletions R/pkg/tests/fulltests/test_jvm_api.R
@@ -20,11 +20,11 @@ context("JVM API")
sparkSession <- sparkR.session(master = sparkRTestMaster, enableHiveSupport = FALSE)

test_that("Create and call methods on object", {
jarr <- sparkR.newJObject("java.util.ArrayList")
jarray <- sparkR.newJObject("java.util.ArrayList")
# Add an element to the array
sparkR.callJMethod(jarr, "add", 1L)
sparkR.callJMethod(jarray, "add", 1L)
# Check if get returns the same element
expect_equal(sparkR.callJMethod(jarr, "get", 0L), 1L)
expect_equal(sparkR.callJMethod(jarray, "get", 0L), 1L)
})

test_that("Call static methods", {