org.apache.mahout.drivers

TDIndexedDatasetReaderWriter

trait TDIndexedDatasetReaderWriter extends TDIndexedDatasetReader with TDIndexedDatasetWriter

A combined trait that both reads and writes text delimited IndexedDataset files

Linear Supertypes
TDIndexedDatasetWriter, Writer[IndexedDatasetSpark], TDIndexedDatasetReader, Reader[IndexedDatasetSpark], AnyRef, Any

Abstract Value Members

  1. abstract val mc: DistributedContext

    Definition Classes
    Writer
  2. abstract val readSchema: Schema

    Definition Classes
    Reader
  3. abstract val sort: Boolean

    Definition Classes
    Writer
  4. abstract val writeSchema: Schema

    Definition Classes
    Writer
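
A concrete mix-in must supply the four abstract members above. A minimal sketch, assuming a Spark-backed DistributedContext is already available; the class name and constructor shape are illustrative, not part of this API:

```scala
// Sketch only: satisfies the abstract members of TDIndexedDatasetReaderWriter.
// The import paths below are assumptions about where DistributedContext and
// Schema live in the Mahout math packages.
import org.apache.mahout.math.drm.DistributedContext
import org.apache.mahout.math.indexeddataset.Schema

class TDIndexedDatasetReadWriteDriver(
    val mc: DistributedContext,    // context for the Spark job
    val readSchema: Schema,        // delimiters and value positions for input
    val writeSchema: Schema,       // delimiters and value positions for output
    val sort: Boolean = true)      // passed through to the writer
  extends TDIndexedDatasetReaderWriter
```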

Concrete Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. def elementReader(mc: DistributedContext, readSchema: Schema, source: String, existingRowIDs: Option[BiDictionary] = None): IndexedDatasetSpark

Read in text delimited elements from all URIs in the comma delimited source String and return the DRM of all elements, updating the row and column dictionaries. If an element has no strength value, its presence implies a strength of 1.

    mc: context for the Spark job

    readSchema: describes the delimiters and positions of values in the text delimited file

    source: comma delimited URIs of text files to be read from

    returns: a new org.apache.mahout.sparkbindings.indexeddataset.IndexedDatasetSpark

    Attributes
    protected
    Definition Classes
    TDIndexedDatasetReader → Reader
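
Since elementReader is protected, callers typically go through the public readElementsFrom entry point inherited from Reader, which delegates to it. A hedged sketch, assuming a concrete instance `rw` of this trait (with mc and readSchema supplied) and illustrative HDFS paths:

```scala
// Assumes `rw` mixes in TDIndexedDatasetReaderWriter; paths are illustrative.
// Each input line is expected to hold one element (row ID, column ID, and an
// optional strength); per the doc above, a missing strength is treated as 1.
val elements = rw.readElementsFrom(
  "hdfs://nn:8020/data/part-00000,hdfs://nn:8020/data/part-00001", // comma delimited URIs
  None)                                                            // no existing row dictionary
```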
  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  11. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  13. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  14. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  15. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  16. final def notify(): Unit

    Definition Classes
    AnyRef
  17. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  18. def readElementsFrom(source: String, existingRowIDs: Option[BiDictionary]): IndexedDatasetSpark

    Definition Classes
    Reader
  19. def readRowsFrom(source: String, existingRowIDs: Option[BiDictionary]): IndexedDatasetSpark

    Definition Classes
    Reader
  20. def rowReader(mc: DistributedContext, readSchema: Schema, source: String, existingRowIDs: Option[BiDictionary] = None): IndexedDatasetSpark

Read in text delimited rows from all URIs in this comma delimited source String and return the DRM of all elements, updating the row and column dictionaries. If an element has no strength value, its presence implies a strength of 1.

    mc: context for the Spark job

    readSchema: describes the delimiters and positions of values in the text delimited file

    source: comma delimited URIs of text files to be read into the IndexedDatasetSpark

    returns: a new org.apache.mahout.sparkbindings.indexeddataset.IndexedDatasetSpark

    Attributes
    protected
    Definition Classes
    TDIndexedDatasetReader → Reader
  21. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  22. def toString(): String

    Definition Classes
    AnyRef → Any
  23. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  24. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  25. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. def writeTo(collection: IndexedDatasetSpark, dest: String): Unit

    Definition Classes
    Writer
  27. def writer(mc: DistributedContext, writeSchema: Schema, dest: String, indexedDataset: IndexedDatasetSpark, sort: Boolean = true): Unit

Write a text delimited version of the IndexedDatasetSpark to the destination directory, using the writeSchema to determine the delimiters and positions of values in the output.

    mc: context for the Spark job

    writeSchema: describes the delimiters and positions of values in the output text delimited file

    dest: directory to write the text delimited version of the IndexedDatasetSpark

    Attributes
    protected
    Definition Classes
    TDIndexedDatasetWriter → Writer
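
Like the readers, writer is protected and is normally reached through the public writeTo inherited from Writer. A sketch of a read-then-write round trip, assuming the same illustrative instance `rw` with all abstract members supplied:

```scala
// Assumes `rw` mixes in TDIndexedDatasetReaderWriter with mc, readSchema,
// writeSchema and sort set; the paths are illustrative only.
val ids = rw.readElementsFrom("hdfs://nn:8020/input/part-00000", None)
rw.writeTo(ids, "hdfs://nn:8020/output/") // text delimited output per writeSchema
```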

Inherited from TDIndexedDatasetWriter

Inherited from Writer[IndexedDatasetSpark]

Inherited from TDIndexedDatasetReader

Inherited from Reader[IndexedDatasetSpark]

Inherited from AnyRef

Inherited from Any
