@@ -92,9 +92,9 @@ public final class BytesToBytesMap {

/**
* The maximum number of keys that BytesToBytesMap supports. The hash table has to be
* power-of-2-sized and its backing Java array can contain at most (1 << 30) elements, since
* that's the largest power-of-2 that's less than Integer.MAX_VALUE. We need two long array
* entries per key, giving us a maximum capacity of (1 << 29).
* power-of-2-sized and its backing Java array can contain at most (1 &lt;&lt; 30) elements,
* since that's the largest power-of-2 that's less than Integer.MAX_VALUE. We need two long array
* entries per key, giving us a maximum capacity of (1 &lt;&lt; 29).
*/
@VisibleForTesting
static final int MAX_CAPACITY = (1 << 29);
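The capacity arithmetic in the comment above can be checked in isolation. This is a sketch using the quantities the comment names; the class and field names here are illustrative, not Spark's actual internals:

```java
public class CapacityCheck {
    // Largest power-of-2 array length that still fits in a Java array:
    // Integer.MAX_VALUE is (1 << 31) - 1, so (1 << 30) is the largest
    // power of two below it.
    static final int MAX_ARRAY_POW2 = 1 << 30;

    // The comment states two long-array entries are needed per key,
    // which halves the key capacity relative to the array length.
    static final int MAX_CAPACITY = 1 << 29;

    public static void main(String[] args) {
        System.out.println(MAX_ARRAY_POW2 <= Integer.MAX_VALUE);  // true
        System.out.println(2L * MAX_CAPACITY == MAX_ARRAY_POW2);  // true
    }
}
```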
@@ -124,7 +124,7 @@ public String uid() {

/**
* Param for max number of iterations
* <p/>
* <p>
* NOTE: The usual way to add a parameter to a model or algorithm is to include:
* - val myParamName: ParamType
* - def getMyParamName
@@ -222,7 +222,7 @@ public Vector predictRaw(Vector features) {
/**
* Create a copy of the model.
* The copy is shallow, except for the embedded paramMap, which gets a deep copy.
* <p/>
* <p>
* This is used for the default implementation of [[transform()]].
*
* In Java, we have to make this method public since Java does not understand Scala's protected
@@ -45,7 +45,7 @@
* Usage: JavaStatefulNetworkWordCount <hostname> <port>
* <hostname> and <port> describe the TCP server that Spark Streaming would connect to receive
* data.
* <p/>
* <p>
* To run this on your local machine, you need to first run a Netcat server
* `$ nc -lk 9999`
* and then run the example
4 changes: 2 additions & 2 deletions launcher/src/main/java/org/apache/spark/launcher/Main.java
@@ -32,7 +32,7 @@ class Main {

/**
* Usage: Main [class] [class args]
* <p/>
* <p>
* This CLI works in two different modes:
* <ul>
* <li>"spark-submit": if <i>class</i> is "org.apache.spark.deploy.SparkSubmit", the
@@ -42,7 +42,7 @@ class Main {
*
* This class works in tandem with the "bin/spark-class" script on Unix-like systems, and
* "bin/spark-class2.cmd" batch script on Windows to execute the final command.
* <p/>
* <p>
* On Unix-like systems, the output is a list of command arguments, separated by the NULL
* character. On Windows, the output is a command line suitable for direct execution from the
* script.
@@ -28,7 +28,7 @@

/**
* Command builder for internal Spark classes.
* <p/>
* <p>
* This class handles building the command to launch all internal Spark classes except for
* SparkSubmit (which is handled by the {@link SparkSubmitCommandBuilder} class).
*/
@@ -193,7 +193,7 @@ public SparkLauncher setMainClass(String mainClass) {
* Adds a no-value argument to the Spark invocation. If the argument is known, this method
* validates whether the argument is indeed a no-value argument, and throws an exception
* otherwise.
* <p/>
* <p>
* Use this method with caution. It is possible to create an invalid Spark command by passing
* unknown arguments to this method, since those are allowed for forward compatibility.
*
@@ -211,10 +211,10 @@ public SparkLauncher addSparkArg(String arg) {
* Adds an argument with a value to the Spark invocation. If the argument name corresponds to
* a known argument, the code validates that the argument actually expects a value, and throws
* an exception otherwise.
* <p/>
* <p>
Member: I think this isn't any more valid as HTML. Was this just an extraneous change?

Contributor Author: It turns out that javadoc in Java 8 doesn't allow self-closing elements (<br/> and <p/>) any more:
http://stackoverflow.com/questions/26049329/javadoc-in-jdk-8-invalid-self-closing-element-not-allowed
http://www.oracle.com/technetwork/java/javase/documentation/index-137868.html#format
<p> is the preferred separator for a paragraph, so it is not HTML but Javadoc we are talking about. Sorry about the confusion.

Member: Hm, I thought that's because it now requires valid HTML, like <p>...</p>. Well, if it fixes an error, obviously that's better than an error. There seem to be ~17 occurrences of this in the code base though -- worth fixing it in one go?
* It is safe to add arguments modified by other methods in this class (such as
* {@link #setMaster(String)} - the last invocation will be the one to take effect.
* <p/>
* <p>
* Use this method with caution. It is possible to create an invalid Spark command by passing
* unknown arguments to this method, since those are allowed for forward compatibility.
*
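As context for the review thread above, a hypothetical class (not part of this PR) showing the style the fix converges on: JDK 8's doclint rejects self-closing elements such as <p/>, and literal angle brackets in Javadoc prose must be written as HTML entities:

```java
/**
 * Returns a greeting for the given name.
 * <p>
 * A bare {@code <p>} separator like the one above passes JDK 8 doclint,
 * whereas a self-closing {@code <p/>} is reported as an invalid
 * self-closing element. Likewise, a literal shift expression such as
 * 1 &lt;&lt; 29 must use escaped angle brackets in Javadoc prose.
 */
public class DoclintExample {
    public static String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```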
@@ -25,11 +25,11 @@

/**
* Special command builder for handling a CLI invocation of SparkSubmit.
* <p/>
* <p>
* This builder adds command line parsing compatible with SparkSubmit. It handles setting
* driver-side options and special parsing behavior needed for the special-casing of certain internal
* Spark applications.
* <p/>
* <p>
* This class has also some special features to aid launching pyspark.
*/
class SparkSubmitCommandBuilder extends AbstractCommandBuilder {
@@ -23,7 +23,7 @@

/**
* Parser for spark-submit command line options.
* <p/>
* <p>
* This class encapsulates the parsing code for spark-submit command line options, so that there
* is a single list of options that needs to be maintained (well, sort of, but it makes it harder
* to break things).
@@ -80,10 +80,10 @@ class SparkSubmitOptionParser {
* This is the canonical list of spark-submit options. Each entry in the array contains the
* different aliases for the same option; the first element of each entry is the "official"
* name of the option, passed to {@link #handle(String, String)}.
* <p/>
* <p>
* Options not listed here nor in the "switch" list below will result in a call to
* {@link #handleUnknown(String)}.
* <p/>
* <p>
* These two arrays are visible for tests.
*/
final String[][] opts = {
@@ -130,7 +130,7 @@ class SparkSubmitOptionParser {

/**
* Parse a list of spark-submit command line options.
* <p/>
* <p>
* See SparkSubmitArguments.scala for a more formal description of available options.
*
* @throws IllegalArgumentException If an error is found during parsing.
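The alias table described in the javadoc above can be sketched standalone. The rows here mirror a few real spark-submit options (--conf does accept -c as an alias), but the class and method names are illustrative, not the real parser:

```java
public class OptionTable {
    // Each row lists the aliases for one option; the first element is
    // the "official" name that a handle-style callback would receive.
    static final String[][] OPTS = {
        { "--master" },
        { "--deploy-mode" },
        { "--conf", "-c" },
    };

    /** Returns the official name for an alias, or null if unknown. */
    static String official(String arg) {
        for (String[] aliases : OPTS) {
            for (String alias : aliases) {
                if (alias.equals(arg)) {
                    return aliases[0];
                }
            }
        }
        // In the real parser, unknown options would be routed to a
        // handleUnknown-style callback instead of returning null.
        return null;
    }
}
```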
@@ -60,7 +60,7 @@ public class TaskMemoryManager {

/**
* Maximum supported data page size (in bytes). In principle, the maximum addressable page size is
* (1L << OFFSET_BITS) bytes, which is 2+ petabytes. However, the on-heap allocator's maximum page
* (1L &lt;&lt; OFFSET_BITS) bytes, which is 2+ petabytes. However, the on-heap allocator's maximum page
* size is limited by the maximum amount of data that can be stored in a long[] array, which is
* (2^31 - 1) * 8 bytes (just under 16 gigabytes). Therefore, we cap this at 16 gigabytes.
*/
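The limits quoted in this comment can be reproduced with plain arithmetic. OFFSET_BITS = 51 is an assumption here about Spark's 64-bit address encoding (13 page-number bits leaving 51 in-page offset bits); the class is a sketch, not Spark code:

```java
public class PageSizeCheck {
    // Assumption: addresses pack 13 page-number bits plus 51 in-page
    // offset bits into a long, so OFFSET_BITS is 51.
    static final int OFFSET_BITS = 51;

    public static void main(String[] args) {
        // 2^51 bytes = 2 PiB, matching the "2+ petabytes" in the comment.
        long addressable = 1L << OFFSET_BITS;
        // A long[] holds at most (2^31 - 1) elements of 8 bytes each,
        // i.e. 8 bytes short of exactly 16 GiB.
        long onHeapMax = ((1L << 31) - 1) * 8L;
        System.out.println(addressable == 2L * (1L << 50));  // true
        System.out.println(onHeapMax == (1L << 34) - 8L);    // true
    }
}
```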