public class HadoopDataSourceProfile extends Object

A profile for HadoopDataSource.
Modifier and Type | Field and Description
---|---|
static String | KEY_COMBINE_BLOCKS: The property key name for isCombineBlocks().
static String | KEY_KEEPALIVE_INTERVAL: The property key name for getKeepAliveInterval().
static String | KEY_LEGACY_FRAGMENT_MIN: The Hadoop configuration key that determines whether to use the minimum value between getMinimumFragmentSize() and FragmentableDataFormat.getMinimumFragmentSize().
static String | KEY_MIN_FRAGMENT: The property key name for getMinimumFragmentSize(FragmentableDataFormat).
static String | KEY_OUTPUT_STAGING: The property key name for isOutputStaging().
static String | KEY_OUTPUT_STREAMING: The property key name for isOutputStreaming().
static String | KEY_PATH: The property key name for getFileSystemPath().
static String | KEY_PREF_FRAGMENT: The property key name for getPreferredFragmentSize(FragmentableDataFormat).
static String | KEY_ROLLFORWARD_THREADS: The property key name for the number of threads that move files in the roll-forward operation.
static String | KEY_SPLIT_BLOCKS: The property key name for isSplitBlocks().
static String | KEY_TEMP: The property key name for getTemporaryFileSystemPath().
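These keys are relative property names that get combined with a per-datasource configuration prefix. A minimal sketch of how fully qualified keys might be assembled, assuming a hypothetical prefix layout of `com.asakusafw.directio.<id>.<key>` and illustrative relative key values ("path", "temp"); verify the actual prefix and constant values against the framework documentation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DirectIoKeySketch {
    // Hypothetical assumption: Direct I/O datasource settings live under this
    // configuration prefix. Check the framework documentation for the real value.
    static final String PREFIX = "com.asakusafw.directio.";

    // Builds a fully qualified configuration key from a datasource ID and a
    // relative property key such as the value of KEY_PATH or KEY_TEMP.
    static String qualify(String id, String relativeKey) {
        return PREFIX + id + '.' + relativeKey;
    }

    public static void main(String[] args) {
        // Illustrative relative key values and paths (assumptions, not the real constants).
        Map<String, String> settings = new LinkedHashMap<>();
        settings.put(qualify("root", "path"), "example/input");
        settings.put(qualify("root", "temp"), "example/temp");
        settings.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```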
Constructor and Description
---|
HadoopDataSourceProfile(org.apache.hadoop.conf.Configuration conf, String id, String contextPath, org.apache.hadoop.fs.Path fileSystemPath, org.apache.hadoop.fs.Path temporaryPath): Creates a new instance.
Modifier and Type | Method and Description
---|---|
static HadoopDataSourceProfile | convert(DirectDataSourceProfile profile, org.apache.hadoop.conf.Configuration conf): Converts the DirectDataSourceProfile into this profile.
String | getContextPath(): Returns the logical context path.
org.apache.hadoop.fs.FileSystem | getFileSystem(): Returns the file system for this datastore.
org.apache.hadoop.fs.Path | getFileSystemPath(): Returns the mapping target path.
String | getId(): Returns the ID of this datasource.
long | getKeepAliveInterval(): Returns the keep-alive interval.
org.apache.hadoop.fs.LocalFileSystem | getLocalFileSystem(): Returns the local file system for this datastore.
long | getMinimumFragmentSize(): Returns the minimum fragment size.
long | getMinimumFragmentSize(FragmentableDataFormat<?> format): Returns the minimum fragment size.
long | getPreferredFragmentSize(): Returns the preferred fragment size.
long | getPreferredFragmentSize(FragmentableDataFormat<?> format): Returns the preferred fragment size.
int | getRollforwardThreads(): Returns the number of threads used to move staged files to the committed area.
org.apache.hadoop.fs.Path | getTemporaryFileSystemPath(): Returns the temporary root path.
boolean | isCombineBlocks(): Returns whether to combine multiple blocks into a single fragment for optimization.
boolean | isOutputStaging(): Returns whether output staging is required.
boolean | isOutputStreaming(): Returns whether output streaming is required.
boolean | isSplitBlocks(): Returns whether to split a DFS block into multiple fragments for optimization.
void | setCombineBlocks(boolean combine): Sets whether to combine blocks for optimization.
void | setKeepAliveInterval(long interval): Sets the keep-alive interval.
void | setMinimumFragmentSize(long size): Configures the minimum fragment size in bytes.
void | setOutputStaging(boolean required): Sets whether output staging is required.
void | setOutputStreaming(boolean required): Sets whether output streaming is required.
void | setPreferredFragmentSize(long size): Configures the preferred fragment size in bytes.
void | setRollforwardThreads(int threads): Sets the number of threads used to move staged files to the committed area.
void | setSplitBlocks(boolean split): Sets whether to split blocks for optimization.
String | toString()
public static final String KEY_PATH

The property key name for getFileSystemPath(). Default is FileSystem.getWorkingDirectory().

public static final String KEY_TEMP

The property key name for getTemporaryFileSystemPath().

public static final String KEY_OUTPUT_STAGING

The property key name for isOutputStaging().

public static final String KEY_OUTPUT_STREAMING

The property key name for isOutputStreaming().

public static final String KEY_MIN_FRAGMENT

The property key name for getMinimumFragmentSize(FragmentableDataFormat).

public static final String KEY_PREF_FRAGMENT

The property key name for getPreferredFragmentSize(FragmentableDataFormat).

public static final String KEY_SPLIT_BLOCKS

The property key name for isSplitBlocks().

public static final String KEY_COMBINE_BLOCKS

The property key name for isCombineBlocks().

public static final String KEY_KEEPALIVE_INTERVAL

The property key name for getKeepAliveInterval().

public static final String KEY_ROLLFORWARD_THREADS

The property key name for the number of threads that move files in the roll-forward operation.

public static final String KEY_LEGACY_FRAGMENT_MIN

The Hadoop configuration key that determines whether to use the minimum value between getMinimumFragmentSize() and FragmentableDataFormat.getMinimumFragmentSize(). After the issue was fixed, the maximum of the two values is used instead.

public HadoopDataSourceProfile(org.apache.hadoop.conf.Configuration conf, String id, String contextPath, org.apache.hadoop.fs.Path fileSystemPath, org.apache.hadoop.fs.Path temporaryPath) throws IOException

Creates a new instance.
Parameters:
- conf - the current configuration
- id - the ID of this datasource
- contextPath - the logical context path
- fileSystemPath - the mapping target path
- temporaryPath - the temporary root path

Throws:
- IOException - if failed to create the profile
- IllegalArgumentException - if some parameters were null
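As a usage sketch, the constructor can be called with a Hadoop Configuration and two paths; the datasource ID, context path, and file system paths below are illustrative values, not defaults prescribed by the framework:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ProfileExample {
    // Builds a profile with illustrative values; the ID "root", the context
    // path "/", and both paths are assumptions for this sketch only.
    static HadoopDataSourceProfile create() throws IOException {
        Configuration conf = new Configuration(); // the current Hadoop configuration
        return new HadoopDataSourceProfile(
                conf,
                "root",                           // datasource ID
                "/",                              // logical context path
                new Path("target/directio"),      // mapping target path
                new Path("target/directio/tmp")); // temporary root path
    }

    public static void main(String[] args) throws IOException {
        HadoopDataSourceProfile profile = create();
        System.out.println(profile.getId());
    }
}
```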
public String getId()
public String getContextPath()
public org.apache.hadoop.fs.Path getFileSystemPath()
public org.apache.hadoop.fs.Path getTemporaryFileSystemPath()
public org.apache.hadoop.fs.FileSystem getFileSystem()
public org.apache.hadoop.fs.LocalFileSystem getLocalFileSystem()
public long getMinimumFragmentSize(FragmentableDataFormat<?> format) throws IOException, InterruptedException

Parameters:
- format - target format

Returns:
- the minimum fragment size, or < 0 if fragmentation is restricted

Throws:
- IOException - if failed to compute the size by I/O error
- InterruptedException - if interrupted
- IllegalArgumentException - if some parameters were null
public long getMinimumFragmentSize()

Returns:
- the minimum fragment size, or < 0 if fragmentation is restricted

public void setMinimumFragmentSize(long size)

Parameters:
- size - the size, or <= 0 to restrict fragmentation

public long getPreferredFragmentSize(FragmentableDataFormat<?> format) throws IOException, InterruptedException

Parameters:
- format - target format

Throws:
- IOException - if failed to compute the size by I/O error
- InterruptedException - if interrupted
- IllegalArgumentException - if some parameters were null

public long getPreferredFragmentSize()

public void setPreferredFragmentSize(long size)

Parameters:
- size - the size

public boolean isSplitBlocks()

Returns:
- true to split, otherwise false
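The fragment-size accessors above return a negative value when fragmentation is restricted, and KEY_LEGACY_FRAGMENT_MIN switches between taking the minimum (legacy behavior) and the maximum (fixed behavior) of the profile's and the format's minimum fragment sizes. A plain-Java sketch of that documented resolution rule, as an illustration of the semantics rather than the framework's actual implementation:

```java
public class FragmentSizeSketch {
    // Illustrative only: mirrors the documented semantics, where a non-positive
    // size restricts fragmentation, the legacy behavior takes the minimum of the
    // two minimum sizes, and the fixed behavior takes the maximum.
    static long effectiveMinimumFragmentSize(long profileMin, long formatMin, boolean legacy) {
        if (profileMin <= 0 || formatMin <= 0) {
            return -1; // fragmentation is restricted
        }
        return legacy ? Math.min(profileMin, formatMin) : Math.max(profileMin, formatMin);
    }

    public static void main(String[] args) {
        System.out.println(effectiveMinimumFragmentSize(1024, 4096, false)); // 4096
        System.out.println(effectiveMinimumFragmentSize(1024, 4096, true));  // 1024
        System.out.println(effectiveMinimumFragmentSize(-1, 4096, false));   // -1
    }
}
```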
public void setSplitBlocks(boolean split)

Parameters:
- split - true to split, otherwise false

public boolean isCombineBlocks()

Returns:
- true to combine, otherwise false

public void setCombineBlocks(boolean combine)

Parameters:
- combine - true to combine, otherwise false

public boolean isOutputStaging()

Returns:
- true if required, otherwise false

public void setOutputStaging(boolean required)

Parameters:
- required - true if required, otherwise false

public boolean isOutputStreaming()

Returns:
- true if required, otherwise false

public void setOutputStreaming(boolean required)

Parameters:
- required - true if required, otherwise false
public long getKeepAliveInterval()

Returns:
- the keep-alive interval in milliseconds, or 0 if keep-alive is disabled

public void setKeepAliveInterval(long interval)

Parameters:
- interval - the keep-alive interval in milliseconds, or 0 to disable keep-alive

public int getRollforwardThreads()

public void setRollforwardThreads(int threads)

Parameters:
- threads - the number of threads

public static HadoopDataSourceProfile convert(DirectDataSourceProfile profile, org.apache.hadoop.conf.Configuration conf) throws IOException
Converts the DirectDataSourceProfile into this profile.

Parameters:
- profile - target profile
- conf - Hadoop configuration

Throws:
- IOException - if failed to convert
- IllegalArgumentException - if some parameters were null
Copyright © 2011–2019 Asakusa Framework Team. All rights reserved.