Merged
42 commits
a109a4b
This commit introduces a new tuning config called 'maxBytesInMemory' …
Apr 5, 2018
b31d634
Fix check style and remove a comment
Apr 5, 2018
ac401c5
Add overlord unsecured paths to coordinator when using combined servi…
jon-wei Apr 5, 2018
9e786e8
More error reporting and stats for ingestion tasks (#5418)
jon-wei Apr 6, 2018
7f4188f
Allow getDomain to return disjointed intervals (#5570)
niketh Apr 6, 2018
83afb73
Adding feature thetaSketchConstant to do some set operation in PostAg…
lssenthilkumar Apr 6, 2018
10dc150
Fix taskDuration docs for KafkaIndexingService (#5572)
dylwylie Apr 6, 2018
ea6b347
Add doc for automatic pendingSegments (#5565)
jihoonson Apr 6, 2018
2bbc6d6
Fix indexTask to respect forceExtendableShardSpecs (#5509)
jihoonson Apr 6, 2018
e9906e8
Deprecate spark2 profile in pom.xml (#5581)
drcrallen Apr 6, 2018
99315da
CompressionUtils: Add support for decompressing xz, bz2, zip. (#5586)
gianm Apr 6, 2018
1d8d14e
This commit introduces a new tuning config called 'maxBytesInMemory' …
Apr 5, 2018
8c85a65
Address code review comments
Apr 6, 2018
55a3d2b
Address more code review comments
Apr 6, 2018
c40678b
Fix some style checks
Apr 6, 2018
1a49eda
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
Apr 9, 2018
9f87c2f
Merge conflicts
Apr 9, 2018
c45bf3b
Fix failing tests
Apr 9, 2018
5363f0f
Address PR comments
Apr 13, 2018
94cd7db
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
Apr 13, 2018
813a261
Fix TeamCity inspection warnings
Apr 13, 2018
ec24d3a
Added maxBytesInMemory config to HadoopTuningConfig
Apr 13, 2018
acb4020
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
Apr 13, 2018
be9a0c1
Updated the docs and examples
Apr 16, 2018
1cc9194
Set maxBytesInMemory to 0 until used
Apr 23, 2018
0aa029a
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
Apr 23, 2018
7822721
Update toString in KafkaSupervisorTuningConfig
Apr 23, 2018
a1416ab
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
Apr 23, 2018
dcea72b
Use correct maxBytesInMemory value in AppenderatorImpl
Apr 24, 2018
21c3a21
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
Apr 24, 2018
98ec694
Update DEFAULT_MAX_BYTES_IN_MEMORY to 1/6 max jvm memory
Apr 26, 2018
7b11f21
Update docs to correct maxBytesInMemory default value
Apr 26, 2018
49c4929
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
Apr 30, 2018
27f98b5
Minor to rename and add comment
Apr 30, 2018
9b8b39f
Add more details in docs
Apr 30, 2018
1d358d7
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
Apr 30, 2018
28adb60
Address new PR comments
May 2, 2018
dd071cf
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
May 2, 2018
5288da3
Address PR comments
May 2, 2018
82fd254
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
May 2, 2018
c405b53
Fix spelling typo
May 3, 2018
62d20f7
Merge branch 'master' of github.com:druid-io/druid into feature-allow…
May 3, 2018
3 changes: 2 additions & 1 deletion docs/content/development/extensions-core/kafka-ingestion.md
Original file line number Diff line number Diff line change
@@ -115,7 +115,8 @@ The tuningConfig is optional and default parameters will be used if no tuningConfig is specified.
|Field|Type|Description|Required|
|-----|----|-----------|--------|
|`type`|String|The indexing task type, this should always be `kafka`.|yes|
|`maxRowsInMemory`|Integer|The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 75000)|
|`maxRowsInMemory`|Integer|The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists). Normally this does not need to be set, but if rows are small in terms of bytes, holding a million rows in memory may be unnecessary and a lower value can be set.|no (default == 1000000)|
|`maxBytesInMemory`|Long|The number of bytes to aggregate in heap memory before persisting. This is based on a rough estimate of memory usage and not actual usage. Normally this is computed internally and user does not need to set it. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists). |no (default == One-sixth of max JVM memory)|
|`maxRowsPerSegment`|Integer|The number of rows to aggregate into a segment; this number is post-aggregation rows. Handoff will happen either if `maxRowsPerSegment` is hit or every `intermediateHandoffPeriod`, whichever happens earlier.|no (default == 5000000)|
|`intermediatePersistPeriod`|ISO8601 Period|The period that determines the rate at which intermediate persists occur.|no (default == PT10M)|
|`maxPendingPersists`|Integer|Maximum number of persists that can be pending but not started. If this limit would be exceeded by a new intermediate persist, ingestion will block until the currently-running persist finishes. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 0, meaning one persist can be running concurrently with ingestion, and none can be queued up)|
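For example, a Kafka indexing tuningConfig that caps in-heap data by bytes as well as rows might look like the following (all values here are illustrative, not recommendations):

```json
{
  "type": "kafka",
  "maxRowsInMemory": 1000000,
  "maxBytesInMemory": 100000000,
  "maxRowsPerSegment": 5000000,
  "intermediatePersistPeriod": "PT10M"
}
```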
3 changes: 2 additions & 1 deletion docs/content/ingestion/batch-ingestion.md
@@ -154,7 +154,8 @@ The tuningConfig is optional and default parameters will be used if no tuningConfig is specified.
|workingPath|String|The working path to use for intermediate results (results between Hadoop jobs).|no (default == '/tmp/druid-indexing')|
|version|String|The version of created segments. Ignored for HadoopIndexTask unless useExplicitVersion is set to true|no (default == datetime that indexing starts at)|
|partitionsSpec|Object|A specification of how to partition each time bucket into segments. Absence of this property means no partitioning will occur. See 'Partitioning specification' below.|no (default == 'hashed')|
|maxRowsInMemory|Integer|The number of rows to aggregate before persisting. Note that this is the number of post-aggregation rows which may not be equal to the number of input events due to roll-up. This is used to manage the required JVM heap size.|no (default == 75000)|
|maxRowsInMemory|Integer|The number of rows to aggregate before persisting. Note that this is the number of post-aggregation rows, which may not be equal to the number of input events due to roll-up. This is used to manage the required JVM heap size. Normally this does not need to be set, but if rows are small in terms of bytes, holding a million rows in memory may be unnecessary and a lower value can be set.|no (default == 1000000)|
|maxBytesInMemory|Long|The number of bytes to aggregate in heap memory before persisting. Normally this is computed internally and user does not need to set it. This is based on a rough estimate of memory usage and not actual usage. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists).|no (default == One-sixth of max JVM memory)|
|leaveIntermediate|Boolean|Leave behind intermediate files (for debugging) in the workingPath when a job completes, whether it passes or fails.|no (default == false)|
|cleanupOnFailure|Boolean|Clean up intermediate files when a job fails (unless leaveIntermediate is on).|no (default == true)|
|overwriteFiles|Boolean|Override existing files found during indexing.|no (default == false)|
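The heap-sizing rule quoted in these tables, maximum indexing heap = maxBytesInMemory * (2 + maxPendingPersists), can be checked with a small sketch. The helper below is illustrative only and is not part of Druid:

```java
public final class HeapBoundSketch
{
    // Worst-case indexing heap usage per the docs above: the in-memory buffer,
    // one persist running concurrently with ingestion, plus any pending persists.
    public static long maxIndexingHeap(long maxBytesInMemory, int maxPendingPersists)
    {
        return maxBytesInMemory * (2 + maxPendingPersists);
    }

    public static void main(String[] args)
    {
        // With the default maxPendingPersists of 0, a 1 GB in-memory buffer can
        // still translate to roughly 2 GB of indexing heap.
        System.out.println(maxIndexingHeap(1_000_000_000L, 0));
    }
}
```

This is why the default of one-sixth of max JVM memory is deliberately conservative: with the multiplier applied, the default still leaves most of the heap for queries and other overhead.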
8 changes: 5 additions & 3 deletions docs/content/ingestion/stream-pull.md
@@ -92,7 +92,7 @@ The property `druid.realtime.specFile` has the path of a file (absolute or relative path and file name) with realtime specifications in it.
},
"tuningConfig": {
"type" : "realtime",
"maxRowsInMemory": 75000,
"maxRowsInMemory": 1000000,
"intermediatePersistPeriod": "PT10m",
"windowPeriod": "PT10m",
"basePersistDirectory": "\/tmp\/realtime\/basePersist",
@@ -141,7 +141,8 @@ The tuningConfig is optional and default parameters will be used if no tuningConfig is specified.
|Field|Type|Description|Required|
|-----|----|-----------|--------|
|type|String|This should always be 'realtime'.|no|
|maxRowsInMemory|Integer|The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 75000)|
|maxRowsInMemory|Integer|The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 1000000)|
|maxBytesInMemory|Long|The maximum number of bytes to keep in memory to aggregate before persisting. This is used to manage the required JVM heap size.|no (default == One-sixth of max JVM memory)|
|windowPeriod|ISO 8601 Period String|The amount of lag time to allow events. This is configured with a 10 minute window, meaning that any event more than 10 minutes ago will be thrown away and not included in the segment generated by the realtime server.|no (default == PT10m)|
|intermediatePersistPeriod|ISO8601 Period String|The period that determines the rate at which intermediate persists occur. These persists determine how often commits happen against the incoming realtime stream. If the realtime data loading process is interrupted at time T, it should be restarted to re-read data that arrived at T minus this period.|no (default == PT10m)|
|basePersistDirectory|String|The directory to put things that need persistence. The plumber is responsible for the actual intermediate persists and this tells it where to store those persists.|no (default == java tmp dir)|
@@ -287,7 +288,8 @@ The following table summarizes constraints between settings in the spec file for the Realtime process.
|segmentGranularity| Time granularity (minute, hour, day, week, month) for loading data at query time | equal to indexGranularity| more than queryGranularity|
|queryGranularity| Time granularity (minute, hour, day, week, month) for rollup | less than segmentGranularity| minute, hour, day, week, month |
|intermediatePersistPeriod| The max time (ISO8601 Period) between flushes of ingested rows from memory to disk | avoid excessive flushing | number of un-persisted rows in memory also constrained by maxRowsInMemory |
|maxRowsInMemory| The max number of ingested rows to hold in memory before a flush to disk | number of un-persisted post-aggregation rows in memory is also constrained by intermediatePersistPeriod | use this to avoid running out of heap if too many rows in an intermediatePersistPeriod |
|maxRowsInMemory| The max number of ingested rows to hold in memory before a flush to disk. Normally this does not need to be set, but if rows are small in terms of bytes, holding a million rows in memory may be unnecessary and a lower value can be set.| number of un-persisted post-aggregation rows in memory is also constrained by intermediatePersistPeriod | use this to avoid running out of heap if too many rows in an intermediatePersistPeriod |
|maxBytesInMemory| The max number of bytes to keep in memory before a flush to disk. Normally this is computed internally and does not need to be set. It is based on a rough estimate of memory usage, not actual usage. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists).| number of un-persisted post-aggregation bytes in memory is also constrained by intermediatePersistPeriod | use this to avoid running out of heap if too many bytes accumulate in an intermediatePersistPeriod |
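The interplay in the table above boils down to a persist being triggered by whichever bound is hit first. As a rough sketch (hypothetical helper, not Druid's actual plumber code):

```java
public final class PersistTriggerSketch
{
    // Hypothetical: an intermediate persist fires when the row limit, the byte
    // limit, or the intermediatePersistPeriod elapses, whichever comes first.
    public static boolean shouldPersist(
        long rowsInMemory, long maxRowsInMemory,
        long bytesInMemory, long maxBytesInMemory,
        long millisSinceLastPersist, long intermediatePersistPeriodMillis)
    {
        return rowsInMemory >= maxRowsInMemory
               || bytesInMemory >= maxBytesInMemory
               || millisSinceLastPersist >= intermediatePersistPeriodMillis;
    }

    public static void main(String[] args)
    {
        // 500k rows and 200 MB buffered, 5 minutes elapsed: the 100 MB byte
        // limit is hit first, so a persist is due.
        System.out.println(shouldPersist(500_000, 1_000_000,
                                         200_000_000, 100_000_000,
                                         300_000, 600_000)); // prints "true"
    }
}
```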

The normal, expected use cases have the following overall constraints: `intermediatePersistPeriod ≤ windowPeriod < segmentGranularity` and `queryGranularity ≤ segmentGranularity`

5 changes: 3 additions & 2 deletions docs/content/ingestion/tasks.md
@@ -77,7 +77,7 @@ The Index Task is a simpler variation of the Index Hadoop task that is designed to be used for smaller data sets.
"tuningConfig" : {
"type" : "index",
"targetPartitionSize" : 5000000,
"maxRowsInMemory" : 75000
"maxRowsInMemory" : 1000000
}
}
}
@@ -137,7 +137,8 @@ The tuningConfig is optional and default parameters will be used if no tuningConfig is specified.
|--------|-----------|-------|---------|
|type|The task type, this should always be "index".|none|yes|
|targetPartitionSize|Used in sharding. Determines how many rows are in each segment.|5000000|no|
|maxRowsInMemory|Used in determining when intermediate persists to disk should occur.|75000|no|
|maxRowsInMemory|Used in determining when intermediate persists to disk should occur. Normally this does not need to be set, but if rows are small in terms of bytes, holding a million rows in memory may be unnecessary and a lower value can be set.|1000000|no|
Contributor: This looks nice. Thanks!

Contributor: Please add this and the below description on maxBytesInMemory to all other places as well.

Author: okay

|maxBytesInMemory|Used in determining when intermediate persists to disk should occur. Normally this is computed internally and does not need to be set. It is the number of bytes to aggregate in heap memory before persisting, based on a rough estimate of memory usage rather than actual usage. The maximum heap memory usage for indexing is maxBytesInMemory * (2 + maxPendingPersists).|1/6 of max JVM memory|no|
|maxTotalRows|Total number of rows in segments waiting for being published. Used in determining when intermediate publish should occur.|150000|no|
|numShards|Directly specify the number of shards to create. If this is specified and 'intervals' is specified in the granularitySpec, the index task can skip the determine intervals/partitions pass through the data. numShards cannot be specified if targetPartitionSize is set.|null|no|
|indexSpec|defines segment storage format options to be used at indexing time, see [IndexSpec](#indexspec)|null|no|
1 change: 0 additions & 1 deletion examples/conf-quickstart/tranquility/kafka.json
@@ -53,7 +53,6 @@
},
"tuningConfig" : {
"type" : "realtime",
"maxRowsInMemory" : "100000",
"intermediatePersistPeriod" : "PT10M",
"windowPeriod" : "PT10M"
}
1 change: 0 additions & 1 deletion examples/conf-quickstart/tranquility/server.json
@@ -53,7 +53,6 @@
},
"tuningConfig" : {
"type" : "realtime",
"maxRowsInMemory" : "100000",
"intermediatePersistPeriod" : "PT10M",
"windowPeriod" : "PT10M"
}
1 change: 0 additions & 1 deletion examples/conf/tranquility/kafka.json
@@ -53,7 +53,6 @@
},
"tuningConfig" : {
"type" : "realtime",
"maxRowsInMemory" : "100000",
"intermediatePersistPeriod" : "PT10M",
"windowPeriod" : "PT10M"
}
1 change: 0 additions & 1 deletion examples/conf/tranquility/server.json
@@ -53,7 +53,6 @@
},
"tuningConfig" : {
"type" : "realtime",
"maxRowsInMemory" : "100000",
"intermediatePersistPeriod" : "PT10M",
"windowPeriod" : "PT10M"
}
@@ -222,6 +222,7 @@ public void setUp() throws Exception
null,
null,
null,
null,
false,
false,
false,
@@ -38,6 +38,7 @@ public class KafkaTuningConfig implements TuningConfig, AppenderatorConfig
private static final boolean DEFAULT_RESET_OFFSET_AUTOMATICALLY = false;

private final int maxRowsInMemory;
private final long maxBytesInMemory;
private final int maxRowsPerSegment;
private final Period intermediatePersistPeriod;
private final File basePersistDirectory;
@@ -58,6 +59,7 @@ public class KafkaTuningConfig implements TuningConfig, AppenderatorConfig
@JsonCreator
public KafkaTuningConfig(
@JsonProperty("maxRowsInMemory") @Nullable Integer maxRowsInMemory,
@JsonProperty("maxBytesInMemory") @Nullable Long maxBytesInMemory,
@JsonProperty("maxRowsPerSegment") @Nullable Integer maxRowsPerSegment,
@JsonProperty("intermediatePersistPeriod") @Nullable Period intermediatePersistPeriod,
@JsonProperty("basePersistDirectory") @Nullable File basePersistDirectory,
@@ -80,6 +82,9 @@ public KafkaTuningConfig(

this.maxRowsInMemory = maxRowsInMemory == null ? defaults.getMaxRowsInMemory() : maxRowsInMemory;
this.maxRowsPerSegment = maxRowsPerSegment == null ? DEFAULT_MAX_ROWS_PER_SEGMENT : maxRowsPerSegment;
    // Initialized to 0 here; the real default is applied lazily when the value is read.
    // @see io.druid.segment.indexing.TuningConfigs#getMaxBytesInMemoryOrDefault(long)
this.maxBytesInMemory = maxBytesInMemory == null ? 0 : maxBytesInMemory;
this.intermediatePersistPeriod = intermediatePersistPeriod == null
? defaults.getIntermediatePersistPeriod()
: intermediatePersistPeriod;
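The lazy-default behavior referenced in the comment above can be sketched as follows. The body of `TuningConfigs.getMaxBytesInMemoryOrDefault` shown here is an assumption, inferred from the documented default of one-sixth of max JVM memory rather than taken from the actual class:

```java
public final class TuningConfigsSketch
{
    // Sketch (assumption): resolve the 0 placeholder stored in the tuning config
    // to the documented default of one-sixth of the JVM's max heap.
    public static long getMaxBytesInMemoryOrDefault(long maxBytesInMemory)
    {
        // 0 means "not set by the user": fall back to Runtime-derived default.
        return maxBytesInMemory == 0
               ? Runtime.getRuntime().maxMemory() / 6
               : maxBytesInMemory;
    }

    public static void main(String[] args)
    {
        System.out.println(getMaxBytesInMemoryOrDefault(0));
        System.out.println(getMaxBytesInMemoryOrDefault(123_456_789L));
    }
}
```

Storing 0 and resolving it at read time (as the toString change in KafkaSupervisorTuningConfig below does) keeps serialized configs round-trippable: a config that never set maxBytesInMemory serializes back without inventing a concrete value.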
@@ -116,6 +121,7 @@ public static KafkaTuningConfig copyOf(KafkaTuningConfig config)
{
return new KafkaTuningConfig(
config.maxRowsInMemory,
config.maxBytesInMemory,
config.maxRowsPerSegment,
config.intermediatePersistPeriod,
config.basePersistDirectory,
@@ -140,6 +146,13 @@ public int getMaxRowsInMemory()
return maxRowsInMemory;
}

@Override
@JsonProperty
public long getMaxBytesInMemory()
{
return maxBytesInMemory;
}

@JsonProperty
public int getMaxRowsPerSegment()
{
@@ -240,6 +253,7 @@ public KafkaTuningConfig withBasePersistDirectory(File dir)
{
return new KafkaTuningConfig(
maxRowsInMemory,
maxBytesInMemory,
maxRowsPerSegment,
intermediatePersistPeriod,
dir,
@@ -269,6 +283,7 @@ public boolean equals(Object o)
KafkaTuningConfig that = (KafkaTuningConfig) o;
return maxRowsInMemory == that.maxRowsInMemory &&
maxRowsPerSegment == that.maxRowsPerSegment &&
maxBytesInMemory == that.maxBytesInMemory &&
maxPendingPersists == that.maxPendingPersists &&
reportParseExceptions == that.reportParseExceptions &&
handoffConditionTimeout == that.handoffConditionTimeout &&
@@ -289,6 +304,7 @@ public int hashCode()
return Objects.hash(
maxRowsInMemory,
maxRowsPerSegment,
maxBytesInMemory,
intermediatePersistPeriod,
basePersistDirectory,
maxPendingPersists,
@@ -310,6 +326,7 @@ public String toString()
return "KafkaTuningConfig{" +
"maxRowsInMemory=" + maxRowsInMemory +
", maxRowsPerSegment=" + maxRowsPerSegment +
", maxBytesInMemory=" + maxBytesInMemory +
", intermediatePersistPeriod=" + intermediatePersistPeriod +
", basePersistDirectory=" + basePersistDirectory +
", maxPendingPersists=" + maxPendingPersists +
@@ -93,6 +93,7 @@ public KafkaSupervisorSpec(
null,
null,
null,
null,
null
);
this.ioConfig = Preconditions.checkNotNull(ioConfig, "ioConfig");
@@ -21,6 +21,7 @@

import com.fasterxml.jackson.annotation.JsonProperty;
import io.druid.indexing.kafka.KafkaTuningConfig;
import io.druid.segment.indexing.TuningConfigs;
import io.druid.segment.writeout.SegmentWriteOutMediumFactory;
import io.druid.segment.IndexSpec;
import org.joda.time.Duration;
@@ -40,6 +41,7 @@ public class KafkaSupervisorTuningConfig extends KafkaTuningConfig

public KafkaSupervisorTuningConfig(
@JsonProperty("maxRowsInMemory") Integer maxRowsInMemory,
@JsonProperty("maxBytesInMemory") Long maxBytesInMemory,
@JsonProperty("maxRowsPerSegment") Integer maxRowsPerSegment,
@JsonProperty("intermediatePersistPeriod") Period intermediatePersistPeriod,
@JsonProperty("basePersistDirectory") File basePersistDirectory,
@@ -65,6 +67,7 @@ public KafkaSupervisorTuningConfig(
{
super(
maxRowsInMemory,
maxBytesInMemory,
maxRowsPerSegment,
intermediatePersistPeriod,
basePersistDirectory,
@@ -131,6 +134,7 @@ public String toString()
return "KafkaSupervisorTuningConfig{" +
"maxRowsInMemory=" + getMaxRowsInMemory() +
", maxRowsPerSegment=" + getMaxRowsPerSegment() +
", maxBytesInMemory=" + TuningConfigs.getMaxBytesInMemoryOrDefault(getMaxBytesInMemory()) +
", intermediatePersistPeriod=" + getIntermediatePersistPeriod() +
", basePersistDirectory=" + getBasePersistDirectory() +
", maxPendingPersists=" + getMaxPendingPersists() +
@@ -1965,6 +1965,7 @@ private KafkaIndexTask createTask(
{
final KafkaTuningConfig tuningConfig = new KafkaTuningConfig(
1000,
null,
maxRowsPerSegment,
new Period("P1Y"),
null,
@@ -2007,6 +2008,7 @@ private KafkaIndexTask createTask(
{
final KafkaTuningConfig tuningConfig = new KafkaTuningConfig(
1000,
null,
maxRowsPerSegment,
new Period("P1Y"),
null,
@@ -56,7 +56,7 @@ public void testSerdeWithDefaults() throws Exception
);

Assert.assertNotNull(config.getBasePersistDirectory());
Assert.assertEquals(75000, config.getMaxRowsInMemory());
Assert.assertEquals(1000000, config.getMaxRowsInMemory());
Assert.assertEquals(5_000_000, config.getMaxRowsPerSegment());
Assert.assertEquals(new Period("PT10M"), config.getIntermediatePersistPeriod());
Assert.assertEquals(0, config.getMaxPendingPersists());
@@ -103,6 +103,7 @@ public void testCopyOf()
{
KafkaTuningConfig original = new KafkaTuningConfig(
1,
null,
2,
new Period("PT3S"),
new File("/tmp/xxx"),
@@ -185,6 +185,7 @@ public void setupTest()

tuningConfig = new KafkaSupervisorTuningConfig(
1000,
null,
50000,
new Period("P1Y"),
new File("/test"),
@@ -58,7 +58,7 @@ public void testSerdeWithDefaults() throws Exception
);

Assert.assertNotNull(config.getBasePersistDirectory());
Assert.assertEquals(75000, config.getMaxRowsInMemory());
Assert.assertEquals(1000000, config.getMaxRowsInMemory());
Assert.assertEquals(5_000_000, config.getMaxRowsPerSegment());
Assert.assertEquals(new Period("PT10M"), config.getIntermediatePersistPeriod());
Assert.assertEquals(0, config.getMaxPendingPersists());