Different projects have different computational requirements. AryaXAI enables users to tailor server resources to match specific project demands, improving performance, scalability, and resource utilization.
Serverless ML and Dedicated Server Options
AryaXAI offers two primary types of computational resources: Serverless ML and Dedicated Servers. This flexibility enables users to optimize performance by selecting the most appropriate server size for their tasks. Users can also scale resources up or down as project needs evolve, ensuring optimal resource usage.
Serverless ML Options:
- Available sizes: Small, Medium, Large, X-Large, 2X-Large
- Applicable to all tasks except synthetic-related tasks.
- Credits are deducted based on runtime and selected specifications.
Serverless ML option vCPU and memory details:
- Small: 1 vCPU, 8 GB memory
- Medium: 2 vCPUs, 16 GB memory
- Large: 4 vCPUs, 32 GB memory
- X-Large: 8 vCPUs, 60 GB memory
- 2X-Large: 16 vCPUs, 98 GB memory
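Serverless billing (credits deducted based on runtime and the selected size) can be sketched as follows. The per-minute rates below are hypothetical placeholders for illustration, not AryaXAI's actual pricing:

```python
# Hypothetical sketch of serverless credit deduction.
# The rate table is illustrative only; actual AryaXAI rates differ.
SERVERLESS_RATES = {  # credits per minute of task runtime
    "Small": 1,
    "Medium": 2,
    "Large": 4,
    "X-Large": 8,
    "2X-Large": 16,
}

def serverless_credits(size: str, runtime_minutes: float) -> float:
    """Credits owed for one serverless task: runtime x per-size rate."""
    return SERVERLESS_RATES[size] * runtime_minutes
```

Under these assumed rates, a 30-minute task on a Medium instance would cost 2 x 30 = 60 credits.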
Dedicated CPU Options:
- Available instances: t3.medium, t3.large, t3.xlarge, t3.2xlarge, m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.10xlarge, m4.16xlarge, c4.large, c4.xlarge, c4.2xlarge, c4.4xlarge, c4.8xlarge, c5.9xlarge, c5.12xlarge, c5.18xlarge, c5.24xlarge
- Available for all tasks except synthetic-related tasks.
- Credits are deducted based on the total time the instance is running.
Instance details for Dedicated CPU options, including vCPUs, RAM, and architecture:
- t3.medium: 2 vCPUs, 4 GiB RAM, architecture: x86_64
- t3.large: 2 vCPUs, 8 GiB RAM, architecture: x86_64
- t3.xlarge: 4 vCPUs, 16 GiB RAM, architecture: x86_64
- t3.2xlarge: 8 vCPUs, 32 GiB RAM, architecture: x86_64
- m4.large: 2 vCPUs, 8 GiB RAM, architecture: x86_64
- m4.xlarge: 4 vCPUs, 16 GiB RAM, architecture: x86_64
- m4.2xlarge: 8 vCPUs, 32 GiB RAM, architecture: x86_64
- m4.4xlarge: 16 vCPUs, 64 GiB RAM, architecture: x86_64
- m4.10xlarge: 40 vCPUs, 160 GiB RAM, architecture: x86_64
- m4.16xlarge: 64 vCPUs, 256 GiB RAM, architecture: x86_64
- c4.large: 2 vCPUs, 3.75 GiB RAM, architecture: x86_64
- c4.xlarge: 4 vCPUs, 7.5 GiB RAM, architecture: x86_64
- c4.2xlarge: 8 vCPUs, 15 GiB RAM, architecture: x86_64
- c4.4xlarge: 16 vCPUs, 30 GiB RAM, architecture: x86_64
- c4.8xlarge: 36 vCPUs, 60 GiB RAM, architecture: x86_64
- c5.9xlarge: 36 vCPUs, 72 GiB RAM, architecture: x86_64
- c5.12xlarge: 48 vCPUs, 96 GiB RAM, architecture: x86_64
- c5.18xlarge: 72 vCPUs, 144 GiB RAM, architecture: x86_64
- c5.24xlarge: 96 vCPUs, 192 GiB RAM, architecture: x86_64
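Because dedicated instances accrue credits for the entire time they run, not per task, a cost sketch differs from the serverless one. The hourly rates here are hypothetical placeholders:

```python
# Hypothetical dedicated-instance billing: credits accrue for total
# instance uptime, regardless of how many tasks run on it.
# Rates are illustrative placeholders, not AryaXAI's actual pricing.
DEDICATED_RATES = {  # credits per hour of instance uptime
    "t3.medium": 2,
    "m4.xlarge": 8,
}

def dedicated_credits(instance: str, uptime_hours: float) -> float:
    """Credits owed for a dedicated instance: uptime x hourly rate."""
    return DEDICATED_RATES[instance] * uptime_hours
```

Under these assumed rates, an m4.xlarge left running for 3 hours would cost 8 x 3 = 24 credits, even if it executed only one short task.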
Dedicated GPU Options:
- Available instances: Shared, xlargeT4, 2xlargeT4, 4xlargeT4, 8xlargeT4, 12xlargeT4, 16xlargeT4, 32xlargeT4, xlargeA10G, 2xlargeA10G, 4xlargeA10G, 8xlargeA10G
- Exclusively for synthetic tasks (e.g., model training, synthetic data generation, anonymity tests).
- Credits are deducted based on the total time the instance is running.
vCPU, memory, and GPU details for each instance in Dedicated GPU Options:
- xlargeT4: 4 vCPUs, 16 GB memory, NVIDIA T4 16 GB GPU
- 2xlargeT4: 8 vCPUs, 32 GB memory, NVIDIA T4 16 GB GPU
- 4xlargeT4: 16 vCPUs, 64 GB memory, NVIDIA T4 16 GB GPU
- 8xlargeT4: 32 vCPUs, 128 GB memory, NVIDIA T4 16 GB GPU
- 12xlargeT4: 48 vCPUs, 192 GB memory, NVIDIA T4 16 GB GPU
- 16xlargeT4: 64 vCPUs, 256 GB memory, NVIDIA T4 16 GB GPU
- 32xlargeT4: 96 vCPUs, 384 GB memory, NVIDIA T4 16 GB GPU
- xlargeA10G: 4 vCPUs, 16 GB memory, NVIDIA A10G 24 GB GPU
- 2xlargeA10G: 8 vCPUs, 32 GB memory, NVIDIA A10G 24 GB GPU
- 4xlargeA10G: 16 vCPUs, 64 GB memory, NVIDIA A10G 24 GB GPU
- 8xlargeA10G: 32 vCPUs, 128 GB memory, NVIDIA A10G 24 GB GPU
Default Option in Dedicated CPU:
Users who do not wish to use a dedicated instance can select the default option; the workspace or project is then treated as having no custom server.
Shared GPU Option:
- The shared GPU option (Arya's free shared infrastructure) is always visible.
- Usage is limited based on the user's plan (e.g., a plan allowing 6 hours of shared GPU usage will restrict further use once the limit is reached).
- Other GPU options will be charged against the user's credit balance.
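The plan-based cap on shared GPU usage could be enforced with a check like the one below. The plan names and hour limits are assumptions for illustration only:

```python
# Hypothetical quota check for Arya's free shared GPU.
# Plan names and hour limits are illustrative assumptions.
SHARED_GPU_LIMIT_HOURS = {"free": 2, "pro": 6}  # allowance per billing period

def can_use_shared_gpu(plan: str, hours_used: float) -> bool:
    """True while the user is still under their plan's shared-GPU allowance."""
    return hours_used < SHARED_GPU_LIMIT_HOURS[plan]
```

For example, a plan with a 6-hour allowance permits a user at 5.5 hours but blocks one who has reached 6 hours; beyond the allowance, paid GPU options charge against the credit balance instead.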
Scenarios and Resource Management
- No Custom Workspace/Project:
  - Users will only see serverless options, with credits deducted based on runtime and selected specifications.
- Custom Workspace, No Custom Project:
  - All projects utilize the dedicated workspace server.
  - Users can choose between serverless and dedicated options.
  - For dedicated options, credits are deducted based on the total time the dedicated instance runs, not task runtime.
- No Custom Workspace, Custom Project:
  - Users can choose between serverless and dedicated options.
  - For dedicated options, credits are deducted based on the total time the dedicated instance runs, not task runtime.
- Custom Workspace, Custom Project:
  - Users can choose between serverless and dedicated options.
  - For dedicated options, credits are deducted based on the total time the dedicated instance runs, not task runtime.
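The four scenarios above reduce to a simple rule: dedicated options appear only when a custom workspace or custom project exists, and the billing basis differs (task runtime for serverless, total instance uptime for dedicated). A sketch of that decision logic, with hypothetical function names:

```python
# Hypothetical sketch of the scenario logic above; names are illustrative,
# not AryaXAI's actual API.
def available_options(custom_workspace: bool, custom_project: bool) -> list[str]:
    """Serverless is always offered; dedicated requires a custom workspace or project."""
    if custom_workspace or custom_project:
        return ["serverless", "dedicated"]
    return ["serverless"]

def billing_basis(option: str) -> str:
    """Serverless bills on task runtime; dedicated bills on total instance uptime."""
    return "task runtime" if option == "serverless" else "instance uptime"
```

With neither a custom workspace nor a custom project, only serverless appears; any other combination also exposes dedicated options billed on instance uptime.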