18 changes: 9 additions & 9 deletions R/pkg/R/DataFrame.R
@@ -932,7 +932,7 @@ setMethod("sample_frac",
#' @param x a SparkDataFrame.
#' @family SparkDataFrame functions
#' @rdname nrow
-#' @name count
+#' @name nrow
Member: hmm, I don't think we should change @name - this should match the function name, "count"

Contributor Author (@junyangq, Aug 19, 2016): Then the \name{count} would appear in both docs and the check complains...

Member: Maybe leave @name out then? That seems to fix the problem the other time?

#' @aliases count,SparkDataFrame-method
#' @export
#' @examples
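
For reference, a minimal sketch of the suggested fix, assuming the method body already in DataFrame.R: with @name omitted, the SparkDataFrame method still lands in the shared Rd file via @rdname, and no duplicate \name{count} is generated.

    #' @rdname nrow
    #' @aliases count,SparkDataFrame-method
    #' @export
    setMethod("count",
              signature(x = "SparkDataFrame"),
              function(x) {
                callJMethod(x@sdf, "count")
              })
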
@@ -1214,9 +1214,9 @@ setMethod("toRDD",
#'
#' Groups the SparkDataFrame using the specified columns, so we can run aggregation on them.
#'
-#' @param x a SparkDataFrame
+#' @param x a SparkDataFrame.
#' @param ... variable(s) (character names(s) or Column(s)) to group on.
-#' @return a GroupedData
+#' @return A GroupedData.
#' @family SparkDataFrame functions
#' @aliases groupBy,SparkDataFrame-method
#' @rdname groupBy
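
To illustrate the documented contract, a minimal sketch (faithful is R's built-in dataset):

    df <- createDataFrame(faithful)
    # group on a character column name, then aggregate; groupBy returns a GroupedData
    head(agg(groupBy(df, "waiting"), avg(df$eruptions)))
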
@@ -3037,8 +3037,8 @@ setMethod("str",
#' This is a no-op if schema doesn't contain column name(s).
#'
#' @param x a SparkDataFrame.
-#' @param ... further arguments to be passed to or from other methods.
#' @param col a character vector of column names or a Column.
+#' @param ... further arguments to be passed to or from other methods.
#' @return A SparkDataFrame.
#'
#' @family SparkDataFrame functions
@@ -3058,7 +3058,7 @@ setMethod("str",
#' @note drop since 2.0.0
setMethod("drop",
signature(x = "SparkDataFrame"),
-function(x, col) {
+function(x, col, ...) {
stopifnot(class(col) == "character" || class(col) == "Column")

if (class(col) == "Column") {
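
A short usage sketch of the documented behavior (hypothetical data; a column name not in the schema is a no-op):

    df <- createDataFrame(faithful)
    drop(df, "eruptions")   # drop by character name(s)
    drop(df, df$waiting)    # drop by Column
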
@@ -3218,11 +3218,11 @@ setMethod("histogram",
#' and to not change the existing data.
#' }
#'
-#' @param x A SparkDataFrame
-#' @param url JDBC database url of the form `jdbc:subprotocol:subname`
-#' @param tableName The name of the table in the external database
+#' @param x a SparkDataFrame.
+#' @param url JDBC database url of the form `jdbc:subprotocol:subname`.
+#' @param tableName the name of the table in the external database.
+#' @param mode one of 'append', 'overwrite', 'error', 'ignore' save mode (it is 'error' by default).
+#' @param ... additional JDBC database connection properties.
Member: @param ... should be last

Contributor Author: Done. Thanks!

-#' @param mode One of 'append', 'overwrite', 'error', 'ignore' save mode (it is 'error' by default)
#' @family SparkDataFrame functions
#' @rdname write.jdbc
#' @name write.jdbc
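
A hedged usage sketch; the URL, table name, and credentials below are placeholders:

    df <- createDataFrame(faithful)
    write.jdbc(df, "jdbc:postgresql://localhost/testdb", "outputTable",
               mode = "overwrite", user = "username", password = "password")
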
2 changes: 1 addition & 1 deletion R/pkg/R/SQLContext.R
@@ -730,7 +730,7 @@ dropTempView <- function(viewName) {
#' @param source The name of external data source
#' @param schema The data schema defined in structType
#' @param na.strings Default string value for NA when source is "csv"
-#' @param ... additional external data source specific named propertie(s).
+#' @param ... additional external data source specific named properties.
#' @return SparkDataFrame
#' @rdname read.df
#' @name read.df
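
For example, a sketch with a placeholder path:

    df <- read.df("examples/src/main/resources/people.json", source = "json")
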
6 changes: 3 additions & 3 deletions R/pkg/R/WindowSpec.R
@@ -55,7 +55,7 @@ setMethod("show", "WindowSpec",
#' Defines the partitioning columns in a WindowSpec.
#'
#' @param x a WindowSpec.
-#' @param col a column to partition on (desribed by the name or Column object).
+#' @param col a column to partition on (described by the name or Column).
#' @param ... additional column(s) to partition on.
#' @return A WindowSpec.
#' @rdname partitionBy
@@ -88,7 +88,7 @@ setMethod("partitionBy",
#'
#' Defines the ordering columns in a WindowSpec.
#' @param x a WindowSpec
-#' @param col a character or Column object indicating an ordering column
+#' @param col a character or Column indicating an ordering column
#' @param ... additional sorting fields
#' @return A WindowSpec.
#' @name orderBy
@@ -194,7 +194,7 @@ setMethod("rangeBetween",
#'
#' Define a windowing column.
#'
-#' @param x a Column object, usually one returned by window function(s).
+#' @param x a Column, usually one returned by window function(s).
#' @param window a WindowSpec object. Can be created by `windowPartitionBy` or
#' `windowOrderBy` and configured by other WindowSpec methods.
#' @rdname over
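
Putting the WindowSpec pieces together, a sketch over mtcars:

    df <- createDataFrame(mtcars)
    ws <- orderBy(windowPartitionBy("am"), "hp")
    # rank rows within each am partition, ordered by hp
    head(select(df, over(rank(), ws), df$hp, df$am))
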
6 changes: 3 additions & 3 deletions R/pkg/R/column.R
@@ -163,7 +163,7 @@ setMethod("alias",
#' @family colum_func
#' @aliases substr,Column-method
#'
-#' @param x a Column object.
+#' @param x a Column.
#' @param start starting position.
#' @param stop ending position.
#' @note substr since 1.4.0
@@ -220,7 +220,7 @@ setMethod("endsWith", signature(x = "Column"),
#' @family colum_func
#' @aliases between,Column-method
#'
-#' @param x a Column object
+#' @param x a Column
#' @param bounds lower and upper bounds
#' @note between since 1.5.0
setMethod("between", signature(x = "Column"),
@@ -235,7 +235,7 @@ setMethod("between", signature(x = "Column"),

#' Casts the column to a different data type.
#'
-#' @param x a Column object.
+#' @param x a Column.
#' @param dataType a character object describing the target data type.
#' See
#' \href{https://spark.apache.org/docs/latest/sparkr.html#data-type-mapping-between-r-and-spark}{
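
Taken together, a small sketch of these column functions on hypothetical data:

    df <- createDataFrame(data.frame(name = c("Andy", "Justin"), age = c(30L, 19L)))
    head(select(df, substr(df$name, 1, 2),
                    between(df$age, c(20, 35)),
                    cast(df$age, "string")))
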
21 changes: 10 additions & 11 deletions R/pkg/R/functions.R
@@ -316,7 +316,7 @@ setMethod("column",
#'
#' Computes the Pearson Correlation Coefficient for two Columns.
#'
-#' @param col2 a (second) Column object.
+#' @param col2 a (second) Column.
#'
#' @rdname corr
#' @name corr
@@ -357,8 +357,8 @@ setMethod("cov", signature(x = "characterOrColumn"),

#' @rdname cov
#'
-#' @param col1 the first Column object.
-#' @param col2 the second Column object.
+#' @param col1 the first Column.
+#' @param col2 the second Column.
#' @name covar_samp
#' @aliases covar_samp,characterOrColumn,characterOrColumn-method
#' @note covar_samp since 2.0.0
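
For instance, a sketch of a global aggregation over mtcars:

    df <- createDataFrame(mtcars)
    head(agg(df, corr(df$mpg, df$wt), covar_samp(df$mpg, df$wt)))
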
@@ -446,8 +446,8 @@ setMethod("cosh",
#'
#' Returns the number of items in a group. This is a column aggregate function.
#'
-#' @rdname n
-#' @name n
+#' @rdname count
+#' @name count
#' @family agg_funcs
#' @aliases count,Column-method
#' @export
@@ -1270,14 +1270,14 @@ setMethod("round",

#' bround
#'
-#' Returns the value of the column `e` rounded to `scale` decimal places using HALF_EVEN rounding
-#' mode if `scale` >= 0 or at integer part when `scale` < 0.
+#' Returns the value of the column \code{e} rounded to \code{scale} decimal places using HALF_EVEN rounding
+#' mode if \code{scale} >= 0 or at integer part when \code{scale} < 0.
#' Also known as Gaussian rounding or bankers' rounding that rounds to the nearest even number.
#' bround(2.5, 0) = 2, bround(3.5, 0) = 4.
#'
#' @param x Column to compute on.
#' @param scale round to \code{scale} digits to the right of the decimal point when \code{scale} > 0,
-#' the nearest even number when \code{scale} = 0, and `scale` digits to the left
+#' the nearest even number when \code{scale} = 0, and \code{scale} digits to the left
#' of the decimal point when \code{scale} < 0.
#' @param ... further arguments to be passed to or from other methods.
Member: should this be "@param ... currently not used."?

#' @rdname bround
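
A worked sketch of the HALF_EVEN ties-to-even behavior described above:

    df <- createDataFrame(data.frame(x = c(2.5, 3.5)))
    head(select(df, bround(df$x, 0)))   # ties go to the nearest even number: 2 and 4
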
@@ -2276,8 +2276,7 @@ setMethod("n_distinct", signature(x = "Column"),
countDistinct(x, ...)
})

-#' @param x a Column.
-#' @rdname n
+#' @rdname count
#' @name n
#' @aliases n,Column-method
#' @export
@@ -2655,7 +2654,7 @@ setMethod("expr", signature(x = "character"),
#' Formats the arguments in printf-style and returns the result as a string column.
#'
#' @param format a character object of format strings.
-#' @param x a Column object.
+#' @param x a Column.
#' @param ... additional Column(s).
#' @family string_funcs
#' @rdname format_string
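
For example, on hypothetical data:

    df <- createDataFrame(data.frame(name = "Andy", age = 30L))
    head(select(df, format_string("%s is %d years old", df$name, df$age)))
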
9 changes: 7 additions & 2 deletions R/pkg/R/generics.R
@@ -432,7 +432,8 @@ setGeneric("coltypes<-", function(x, value) { standardGeneric("coltypes<-") })
#' @export
setGeneric("columns", function(x) {standardGeneric("columns") })

-#' @rdname nrow
+#' @param x a GroupedData or Column.
+#' @rdname count
#' @export
setGeneric("count", function(x) { standardGeneric("count") })

@@ -1071,7 +1072,7 @@ setGeneric("month", function(x) { standardGeneric("month") })
#' @export
setGeneric("months_between", function(y, x) { standardGeneric("months_between") })

-#' @rdname n
+#' @rdname count
#' @export
setGeneric("n", function(x) { standardGeneric("n") })

@@ -1303,6 +1304,10 @@ setGeneric("year", function(x) { standardGeneric("year") })
#' @export
setGeneric("spark.glm", function(data, formula, ...) { standardGeneric("spark.glm") })

+#' @param x,y For \code{glm}: logical values indicating whether the response vector
+#'            and model matrix used in the fitting process should be returned as
+#'            components of the returned value.
+#' @inheritParams stats::glm
#' @rdname glm
#' @export
setGeneric("glm")
1 change: 0 additions & 1 deletion R/pkg/R/group.R
@@ -59,7 +59,6 @@ setMethod("show", "GroupedData",
#' Count the number of rows for each group.
#' The resulting SparkDataFrame will also contain the grouping columns.
#'
-#' @param x a GroupedData.
#' @return A SparkDataFrame.
#' @rdname count
#' @aliases count,GroupedData-method
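
For example:

    df <- createDataFrame(faithful)
    # one row per distinct waiting value, plus a count column
    head(count(groupBy(df, "waiting")))
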
8 changes: 4 additions & 4 deletions R/pkg/R/mllib.R
@@ -173,7 +173,7 @@ setMethod("spark.glm", signature(data = "SparkDataFrame", formula = "formula"),
#' Fits a generalized linear model, similarly to R's glm().
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
-#' @param data SparkDataFrame for training.
+#' @param data a SparkDataFrame or R's glm data for training.
#' @param family a description of the error distribution and link function to be used in the model.
#' This can be a character string naming a family function, a family function or
#' the result of a call to a family function. Refer R family at
@@ -508,10 +508,10 @@ setMethod("summary", signature(object = "IsotonicRegressionModel"),
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
#' Note that the response variable of formula is empty in spark.kmeans.
-#' @param ... additional argument(s) passed to the method.
#' @param k number of centers.
#' @param maxIter maximum iteration number.
#' @param initMode the initialization algorithm choosen to fit the model.
+#' @param ... additional argument(s) passed to the method.
#' @return \code{spark.kmeans} returns a fitted k-means model.
#' @rdname spark.kmeans
#' @aliases spark.kmeans,SparkDataFrame,formula-method
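
A usage sketch with illustrative parameter values (note the empty response in the formula):

    df <- createDataFrame(iris)
    model <- spark.kmeans(df, ~ Sepal_Length + Sepal_Width, k = 3, maxIter = 10)
    summary(model)
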
@@ -628,8 +628,8 @@ setMethod("predict", signature(object = "KMeansModel"),
#' @param data a \code{SparkDataFrame} of observations and labels for model fitting.
#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
#' operators are supported, including '~', '.', ':', '+', and '-'.
-#' @param ... additional argument(s) passed to the method. Currently only \code{smoothing}.
#' @param smoothing smoothing parameter.
+#' @param ... additional argument(s) passed to the method. Currently only \code{smoothing}.
#' @return \code{spark.naiveBayes} returns a fitted naive Bayes model.
#' @rdname spark.naiveBayes
#' @aliases spark.naiveBayes,SparkDataFrame,formula-method
@@ -657,7 +657,7 @@ setMethod("predict", signature(object = "KMeansModel"),
#' }
#' @note spark.naiveBayes since 2.0.0
setMethod("spark.naiveBayes", signature(data = "SparkDataFrame", formula = "formula"),
-function(data, formula, smoothing = 1.0) {
+function(data, formula, smoothing = 1.0, ...) {
formula <- paste(deparse(formula), collapse = "")
jobj <- callJStatic("org.apache.spark.ml.r.NaiveBayesWrapper", "fit",
formula, data@sdf, smoothing)
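
A hedged usage sketch (infert is R's built-in dataset; the formula is illustrative):

    df <- createDataFrame(infert)
    model <- spark.naiveBayes(df, education ~ spontaneous + induced, smoothing = 1.0)
    summary(model)
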