Frequently Asked Questions
Find answers to common questions about our cloud services and platform
Compute & Infrastructure
Most GPU cloud virtual machines support popular Linux distributions such as Ubuntu, AlmaLinux, and Rocky Linux, which are widely used for AI/ML, HPC, and compute-intensive workloads.
No. The operating system comes preinstalled on the GPU Virtual Machine and cannot be installed manually.
You can select an operating system from the list of supported OS images available during deployment. If your preferred OS is not listed, custom OS installation is not supported on the GPU VM.
Yes. You will have full root access. You can install and configure any software on your GPU instance.
Virtual Machine backups and point-in-time recovery are not included by default as part of the standard VM service, and complimentary backups are not provided.
Under Verda's shared responsibility model, customers are responsible for implementing and managing their own backup and disaster recovery solutions. This includes protecting VM data and ensuring systems can be restored when required.
For GPU Virtual Machines, customers are responsible for the configuration and management of the operating system, networking, firewall rules, access control, data encryption, backups, disaster recovery, and overall service continuity beyond the provider's managed scope.
In the future, paid Continuous Data Protection (CDP) backup services may be made available for critical workloads. Availability will depend on the operating system and whether the VM configuration supports installation of the required backup agent. Customers interested in paid backup options should contact the support team for further details.
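As a minimal illustration of a customer-managed backup, data on a GPU VM can be copied to external storage with standard Linux tools; the remote host and paths below are placeholders, not a managed service:
rsync -avz /data/ backup-user@backup.example.com:/backups/gpu-vm/   # sync a data directory to a remote host you manage
tar -czf /root/etc-backup-$(date +%F).tar.gz /etc                   # keep a dated archive of configuration files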
Upgrading the GPU within an existing Virtual Machine is not supported. To move to a higher GPU model, the Virtual Machine must be rebuilt using the new GPU configuration.
Before proceeding with the upgrade, please ensure that all data, configurations, and applications on the existing VM are fully backed up, as the rebuild process will result in the replacement of the current Virtual Machine.
Automatic or manual vertical and horizontal scaling is not supported for GPU Virtual Machines. To change GPU resources or capacity, the Virtual Machine must be rebuilt using the preferred configuration.
This functionality may be introduced in the future; however, there is no confirmed timeline or ETA at this time.
Yes. Each GPU Virtual Machine is provisioned with a fully dedicated physical NVIDIA GPU.
A "dedicated GPU" means the entire physical GPU is exclusively assigned to a single VM using PCI passthrough. The GPU is not shared, sliced, or time-shared with any other customer or VM.
While CPU, memory, and networking resources are virtualized on the host server, the GPU itself is reserved solely for your VM for the duration of its runtime.
Yes. A physical server may host multiple VMs; however, the number of GPU VMs on a server never exceeds the number of physical GPUs installed.
For example, a server with 8 physical GPUs can run up to 8 GPU-enabled VMs, each receiving one full GPU. Additional VMs on the same server are CPU-only and do not access GPU resources.
No. We do not use vGPU, GPU slicing, or fractional GPU allocation for our GPU Virtual Machines.
Each VM receives the entire GPU, including full VRAM, compute cores, and clock speeds, delivering performance comparable to bare-metal GPU usage.
GPU Virtual Machine
- Dedicated physical GPU
- Shared underlying CPU, RAM, and networking hardware
- Hypervisor-based virtualization
- Flexible provisioning and hourly billing
Bare-Metal GPU Server
- Entire physical server dedicated to a single customer
- Full control of CPU, RAM, GPUs, storage, and networking
- No hypervisor overhead
- Ideal for maximum performance, low-latency workloads, or custom system configurations
CUDA is a platform developed by NVIDIA that allows software to use the GPU for computing tasks. Most AI and ML tools require CUDA.
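As a quick check of which CUDA version your environment supports (assuming the NVIDIA driver is installed), you can run:
nvidia-smi       # the header shows the driver version and the highest CUDA version it supports
nvcc --version   # prints the installed CUDA toolkit version, if the toolkit is present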
The answers below cover common questions about GPU Virtual Machines (GPU VMs), grouped for clarity into Technical and Billing / Commercial categories and written in simple, client-friendly language.
A GPU virtual machine is a cloud-based virtual server that includes one or more GPUs to accelerate compute-intensive workloads such as AI, machine learning, rendering, and simulations.
A GPU VM uses graphics processors optimized for massive parallel processing, while a CPU VM relies only on general-purpose processors, making GPU VMs significantly faster for compute-heavy tasks.
GPU VMs are ideal for AI/ML training and inference, deep learning, data analytics, video rendering, scientific simulations, and high-performance computing workloads.
Yes. GPU VMs are widely used for rendering, video encoding, 3D modeling, financial simulations, scientific research, and other parallel compute workloads beyond AI.
Yes. Most GPU cloud platforms offer pre-configured images, documentation, and ready-to-use environments that make it easy for beginners to get started.
No. A GPU VM works like a standard Linux or Windows server, allowing you to run any compatible software without requiring machine learning expertise.
Yes. Existing applications can run on a GPU VM, and GPU acceleration can be enabled if the application supports CUDA, OpenCL, or similar frameworks.
This depends on the plan. Many providers offer fully dedicated GPUs, while some provide shared or partitioned GPUs for cost-efficient workloads.
GPU VMs can typically be launched within minutes using pre-built images and automated provisioning.
Yes. Modern GPU virtualization delivers near bare-metal performance, making it suitable for most production and research workloads.
Yes. GPU virtual machines are available around the clock.
Yes. GPU VMs eliminate upfront hardware costs, reduce maintenance, provide instant scalability, and offer global availability, making them a strong alternative to on-premise GPU infrastructure.
No. You cannot upgrade the RAM or vCPUs of an existing VM. The entire VM must be rebuilt with the required resources.
We offer a range of NVIDIA GPUs, including popular models such as A100, H100, H200, L40/L40S, and RTX-series GPUs. Available models may vary by region and instance type, and customers can choose the GPU model based on their workload requirements at the time of VM creation.
Available GPU memory (VRAM) depends on the underlying GPU architecture and configuration. Entry-level GPUs typically offer 16-32 GB of VRAM, while standard A-series and H-series GPUs provide 48 GB, 80 GB, or 140+ GB per GPU, with enterprise-grade GPUs at the top of that range. Multi-GPU instances allow workloads to scale across multiple GPUs with high aggregate VRAM.
GPU model selection is available during instance provisioning. The available GPU options depend on the selected instance type, region, and current capacity.
CUDA compute capability depends on the selected NVIDIA GPU model and generation. Newer GPUs offer higher compute capability, enabling improved performance and access to advanced CUDA features.
Yes. Multi-GPU configurations are supported, allowing users to run workloads across multiple GPUs within a single virtual machine, subject to instance type and availability.
NVLink support is available on select GPU models and multi-GPU configurations. Availability depends on the GPU architecture and instance design.
You can check whether NVLink is available and active on your system using the methods below. These steps apply to most Linux-based GPU VMs and bare-metal servers.
nvidia-smi nvlink -s
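On multi-GPU instances you can also inspect the interconnect topology with standard NVIDIA tooling; for example:
nvidia-smi topo -m   # matrix showing NVLink (NV#) or PCIe paths between GPU pairs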
GPU clock speed varies by GPU model and generation. Detailed specifications are available in the instance or GPU documentation provided at the time of selection. Typical clock speeds for common GPU models are listed below.
| GPU Model | Typical Base Clock | Typical Boost Clock |
|---|---|---|
| NVIDIA H200 | ~1.4 GHz | ~1.8 GHz |
| NVIDIA H100 (SXM/PCIe) | ~1.3-1.4 GHz | ~1.7-1.8 GHz |
| NVIDIA B200 (Blackwell) | ~1.5 GHz | ~1.9+ GHz |
| NVIDIA A100 (SXM/PCIe) | ~1.2-1.4 GHz | ~1.4-1.6 GHz |
| NVIDIA V100 | ~1.2 GHz | ~1.5 GHz |
| NVIDIA L40S | ~1.1 GHz | ~2.4 GHz |
| NVIDIA A40 | ~1.1 GHz | ~1.7 GHz |
| NVIDIA A6000 | ~1.4 GHz | ~1.8 GHz |
| RTX 6000 (Ada) | ~1.5 GHz | ~2.5 GHz |
| NVIDIA A10 | ~1.1 GHz | ~1.7 GHz |
| NVIDIA L4 | ~1.0 GHz | ~2.5 GHz |
| RTX 4090 | ~2.2 GHz | ~2.5-2.6 GHz |
You can check the current GPU clock speed using the nvidia-smi utility. Run the following command on your GPU Virtual Machine:
nvidia-smi --query-gpu=clocks.current.graphics,clocks.current.sm,clocks.current.memory --format=csv
This command displays the current graphics, SM (compute), and memory clock speeds of the GPU. Clock speeds may vary in real time based on workload, power limits, and thermal conditions.
MIG allows a single physical NVIDIA GPU (like an H100 or A100) to be partitioned into multiple smaller, isolated GPU instances.
Each partition has its own compute cores, memory, and bandwidth, so multiple workloads can run independently on the same physical GPU.
MIG support depends on the underlying GPU model. GPUs that support MIG can be partitioned into smaller, isolated GPU instances where enabled by the platform.
You can check if MIG (Multi-Instance GPU) is available on your GPU Virtual Machine using NVIDIA's tools. Here's a step-by-step guide:
- Check if the GPU supports MIG: nvidia-smi -L
- Check if MIG is enabled: nvidia-smi -i 0 -q | grep -i mig
- Check the MIG instances (if any): nvidia-smi mig -lgi
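Where the GPU model and platform permit it, MIG partitioning is typically managed with nvidia-smi. A hedged sketch (profile IDs vary by GPU, and enabling MIG may require a GPU reset or reboot):
sudo nvidia-smi -i 0 -mig 1                 # enable MIG mode on GPU 0
sudo nvidia-smi mig -lgip                   # list the GPU instance profiles supported by this GPU
sudo nvidia-smi mig -cgi <profile-id> -C    # create a GPU instance and compute instance from a chosen profile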
Yes. Customers can run standard benchmarking tools and frameworks such as CUDA, TensorFlow, or PyTorch benchmarks on their GPU instances.
GPU instances are typically provisioned with dedicated GPU resources. In some cases, shared or partitioned GPUs may be offered for cost-efficient workloads.
Yes. High-memory and high-performance GPUs are well suited for training large AI and deep learning models.
Yes. GPUs are commonly used for rendering, video encoding/decoding, graphics workloads, and other compute-intensive tasks in addition to AI workloads.
Tensor Cores are available on most modern NVIDIA GPUs. These cores significantly accelerate AI training, inference, and mixed-precision compute workloads.
You can check for Tensor Core information by executing the following command; if it returns no output, confirm support from the GPU architecture using the table below.
nvidia-smi -q | grep -i "tensor"
Tensor Core Generation Comparison Table
| GPU Architecture | Example GPUs | Tensor Core Generation | Key Precision Formats | Primary Benefit |
|---|---|---|---|---|
| Volta | V100 | 1st Gen | FP16 | First AI acceleration with Tensor Cores |
| Turing | RTX 6000 | 2nd Gen | FP16, INT8, INT4 | AI + graphics workloads |
| Ampere | A100, A40, A10, A6000 | 3rd Gen | FP16, BF16, TF32, INT8 | Strong AI training + inference |
| Ada Lovelace | L4, L40S, RTX 4090 | 4th Gen | FP16, BF16, TF32, INT8 | Optimized inference & rendering |
| Hopper | H100, H200 | 4th Gen (Enhanced) | FP16, BF16, TF32, FP8 | Large-scale AI training |
| Blackwell | B200 | 5th Gen | FP4, FP8, FP16, BF16 | Massive AI model scaling |
GPU pass-through is a virtualization technology that allows a physical GPU installed on a host machine to be directly assigned to a virtual machine (VM). This means the VM can access the GPU hardware almost as if it were running on a bare-metal server, with full access to:
- GPU compute cores
- VRAM (video memory)
- Clock speeds
- PCIe bandwidth
Unlike shared or virtualized GPUs (vGPU), GPU pass-through gives exclusive control of the GPU to a single VM, ensuring maximum performance for workloads like AI training, rendering, and scientific computing.
GPU pass-through is typically available on dedicated GPU instances, providing near bare-metal access to GPU hardware. Availability depends on the virtualization model.
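From inside the VM you can confirm that a full physical GPU is visible rather than a vGPU slice; for example:
lspci -nnk | grep -iA3 nvidia                            # the GPU appears as a PCI device bound to the nvidia driver
nvidia-smi --query-gpu=name,memory.total --format=csv    # model and total VRAM should match the full physical GPU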
GPU usage is flexible and based on a pay-as-you-go model. Continuous usage is allowed while the instance is running.
Currently, we operate on a pay-as-you-go billing model, allowing you to use your GPU Virtual Machine continuously for as long as needed.
We may introduce long-term reservation plans in the future to provide more flexibility and cost optimization.
GPU availability depends on the data center region and current capacity. The regions where GPUs can be deployed will be displayed at the time you launch a GPU Virtual Machine, allowing you to select from the available locations.
Our GPU Virtual Machines are primarily powered by AMD EPYC 7000 or 9000 series processors. In some configurations, Intel Xeon Gold processors may also be used, depending on the server type and availability.
The number of CPU cores available with a GPU Virtual Machine depends on the region and the GPU model selected. The exact CPU core allocation for your chosen configuration will be displayed at the time you launch the instance.
No. The CPU and RAM allocation is predefined for each GPU instance type. The exact configuration will be displayed when you select the instance during the launch process.
The maximum RAM depends on the GPU model and the data center location. The exact RAM allocation for each instance will be displayed when you select the GPU Virtual Machine during the launch process.
GPU Virtual Machines use high-performance storage, which may be SSD, NVMe, or a combination of both, depending on the instance type and configuration. The specific storage type will be displayed when you select the instance during launch.
The default storage allocation depends on the GPU instance type and configuration. The exact size is displayed when you select the instance during launch.
Yes. Additional storage volumes can typically be attached to your GPU VM, subject to available capacity and instance limits.
Yes. The storage associated with your GPU Virtual Machine is persistent and retains data.
Storage IOPS performance varies based on the storage tier (SSD, NVMe, or hybrid) and the selected instance configuration. Performance is optimized to align with the capabilities of the underlying hardware and workload requirements.
VM snapshot functionality is not currently supported. However, where available for a specific GPU model and location, you can attach a separate volume and store your data on it.
Yes, but this facility may not be available in all locations. The option to add multiple volumes will be visible where it is available.
The default storage included with a GPU VM comes with a fixed capacity. However, you can purchase additional storage and configure the size based on your requirements.
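Once an additional volume has been attached from the portal, it can be formatted and mounted from inside the VM. A minimal sketch, assuming the new disk appears as /dev/vdb (the device name on your VM may differ):
lsblk                        # identify the newly attached, unmounted disk
sudo mkfs.ext4 /dev/vdb      # format it (this destroys any existing data on the disk)
sudo mkdir -p /mnt/data && sudo mount /dev/vdb /mnt/data
echo '/dev/vdb /mnt/data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab   # remount automatically after reboot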
No. Windows is not a supported OS on GPU VMs.
No. You need to install it separately.
No. CUDA and cuDNN are not pre-configured.
We provide GPU VMs with a standard Linux distribution and full root-level SSH access. However, AI/ML frameworks and toolchains (such as CUDA, cuDNN, TensorFlow, or PyTorch) are not pre-installed. This allows you the flexibility to set up and customize the AI/ML environment based on your specific requirements.
Yes. You will have full root (administrator) access to the GPU VM, allowing you to install, configure, and manage any custom software or dependencies required for your workload.
Yes. Docker is supported. You can install Docker on the VM and then pull and run the Docker images you need.
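A minimal sketch for running GPU-enabled containers on an Ubuntu-based VM, assuming the NVIDIA driver is already installed and the NVIDIA Container Toolkit has been installed per NVIDIA's documentation (the CUDA image tag below is only an example):
sudo apt update && sudo apt install -y docker.io
sudo nvidia-ctk runtime configure --runtime=docker && sudo systemctl restart docker
sudo docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi   # should print the GPU from inside the container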
Yes. You can deploy and manage Kubernetes clusters using GPU VMs. With full root access, you can install Kubernetes components and configure GPU support (such as NVIDIA drivers and device plugins) to run GPU-accelerated containers and workloads.
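After installing the NVIDIA driver and the NVIDIA device plugin for Kubernetes (per NVIDIA's documentation), you can verify that the cluster advertises GPU resources; for example:
kubectl describe nodes | grep -i nvidia.com/gpu   # a non-zero count means pods can request GPUs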
No. Python is not pre-installed by default. However, you can easily install Python on your VM and configure it according to your requirements.
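On an Ubuntu or Debian image, for example, Python can be installed from the distribution packages:
sudo apt update && sudo apt install -y python3 python3-pip python3-venv
python3 --version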
Yes. Jupyter Notebooks are supported and can be installed on the GPU VM. With full administrative access, you can set up and run Jupyter Notebook or JupyterLab according to your requirements.
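A minimal sketch for setting up JupyterLab in a virtual environment (paths are illustrative; protect the server with a password or an SSH tunnel before exposing it):
python3 -m venv ~/jupyter-env && source ~/jupyter-env/bin/activate
pip install jupyterlab
jupyter lab --ip=0.0.0.0 --no-browser   # listens on all interfaces so you can reach it remotely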
TensorFlow and PyTorch are fully supported and can be installed on the GPU VM with full root (administrator) permissions, allowing complete control over setup and configuration.
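As an illustration, both frameworks can be installed with pip once the NVIDIA driver is in place, and each provides a simple check that the GPU is visible:
pip install torch                      # Linux wheels bundle the CUDA libraries
pip install 'tensorflow[and-cuda]'     # TensorFlow build that bundles CUDA libraries
python3 -c "import torch; print(torch.cuda.is_available())"
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"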
No. We don't provide marketplace images.
Security, Compliance & Privacy
You will access the VM using the SSH key you provided at the time of deploying the instance. For the first login, connect from the system whose SSH key was added to the GPU VM using the following command:
ssh root@YOUR_SERVER_IP
Once you are logged in, you can enable root password login by changing the following SSH configuration variables.
Open the SSH config file at /etc/ssh/sshd_config and change the following variables:
PermitRootLogin prohibit-password => PermitRootLogin yes
Uncomment PasswordAuthentication
#PasswordAuthentication yes => PasswordAuthentication yes
Go to the directory /etc/ssh/sshd_config.d/ and open the file 60-cloudimg-settings.conf:
Change PasswordAuthentication no => PasswordAuthentication yes
Restart SSH service:
sudo systemctl restart ssh
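The same changes can be scripted in one pass; a sketch assuming an Ubuntu cloud image (review each file before and after editing):
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config.d/60-cloudimg-settings.conf
sudo systemctl restart ssh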
Yes. SSH is enabled by default. Password-based root login and password authentication must be enabled manually, as described above.
RDP is not enabled by default. You can install xrdp or other similar tools.
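For example, on an Ubuntu-based VM you could install xrdp together with a lightweight desktop, then open TCP 3389 in your firewall:
sudo apt update && sudo apt install -y xrdp xfce4
sudo systemctl enable --now xrdp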
One public IP address is provided with each virtual machine.
No. Additional IPs are not provided.
No. Private networking is not available.
Firewalls are enabled at the provider and host levels to protect the infrastructure. However, no firewall is enabled inside your VM by default - you can configure and manage VM-level firewall rules as needed.
Our infrastructure is protected by upstream, data-center-level DDoS mitigation designed to safeguard the core network and platform from moderate volumetric attacks.
However, this protection is applied at the provider and host levels and does not include dedicated or configurable DDoS protection for individual virtual machines.
With full root access, you have complete control over inbound and outbound ports within your Linux virtual machine. You can configure firewall rules using modern, free firewall frameworks such as nftables or firewall management tools like firewalld and UFW, depending on your Linux distribution, to open, restrict, or manage ports as required by your applications.
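A minimal UFW example on Ubuntu (make sure SSH is allowed before enabling the firewall, or you may lock yourself out):
sudo ufw allow OpenSSH
sudo ufw allow 443/tcp      # example: open HTTPS for an application
sudo ufw enable
sudo ufw status verbose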
IPv6 support is currently not available, but it may be considered for future platform updates.
You can install and configure VPN software on your GPU virtual machine with full root access, allowing you to establish secure VPN connections as needed.
Latency depends primarily on network proximity. We offer GPU virtual machines across multiple geographic locations, allowing you to deploy workloads closer to your users, data sources, or inference endpoints. By selecting a location nearest to your target audience or data pipeline, you can minimize network round-trip times and achieve lower latency for latency-sensitive AI training and inference workloads.
Our GPU virtual machines are hosted on highly available infrastructure designed to deliver reliable service. The platform targets high uptime through redundant networking, power, and hardware at the data-center and host levels.
While we continuously monitor and maintain the infrastructure to minimize downtime, uptime commitments and service credits (if applicable) are defined in our Service Level Agreement (SLA) and Terms & Conditions.
Yes. GPU virtual machines are available across multiple geographic locations. The available regions for each GPU model and resource configuration are displayed on our website, allowing you to select the location that best matches your target audience or workload requirements.
Data security for GPU virtual machines is built on a multi-layered, shared responsibility model combining robust infrastructure protections and customer controls. Across providers such as Verda, Cudo Compute, Sesterce, Hyperstack, and similar GPU cloud platforms, the following key security measures help protect your data and workloads:
Infrastructure and Compliance Standards
- Providers operate in secure, purpose-built data centers with physical safeguards such as biometric access controls, surveillance, and restricted-access facilities to prevent unauthorized access to hardware.
- Many providers comply with recognized international standards such as ISO 27001, GDPR, HIPAA, SOC-type audits, and other regional compliance frameworks, demonstrating structured information security management and data protection practices.
Network & Access Security
- Industry-standard practices include advanced encryption for data in transit (TLS) and access controls such as SSH key management, role-based access control (RBAC), multi-factor authentication (MFA), and secure API access.
- Network defenses like firewalls, segmentation, and intrusion detection systems help defend against unauthorized access or attacks at the network layer.
Data Protection
- Data at rest on storage systems is typically protected by encryption, helping ensure that stored data cannot be read if media is accessed without authorization.
- Providers often offer features to help you manage data sovereignty and residency requirements by hosting data in specific regions to comply with local laws.
Monitoring, Logging & Detection
- Continuous monitoring and logging of infrastructure, network activity, and access events help detect anomalies, support audit trails, and enable faster incident response.
- Some platforms provide security logging that can integrate with customer SIEM tools for deeper visibility.
Shared Responsibility & Customer Controls
- Providers secure the underlying physical infrastructure, hypervisor, and networking layers, while customers are responsible for securing the guest operating system, application configuration, encryption keys, user access policies, and firewall rules inside their GPU VM. This shared responsibility model is standard across cloud services.
Provider Examples
- Verda: ISO 27001 certified and GDPR compliant, with documented compliance and physical security measures at data-center facilities.
- Cudo Compute: Emphasizes advanced encryption, robust access controls, continuous monitoring, and compliance with standards like GDPR, ISO 27001, HIPAA, and others.
- Sesterce: While specific security controls aren't extensively published, the platform emphasizes secure data centers and secure SSH access; localized offerings with EU data residency help meet regulatory and data protection needs.
- Hyperstack: Public documentation focuses on infrastructure and performance; secure hosting in modern data centers and logical isolation can be expected based on industry norms. (Note: specific details aren't publicly published)
In summary, GPU VM providers use industry-standard encryption, physical and network security controls, compliance frameworks, monitoring, and shared-responsibility models to ensure data security. You can enhance security further by configuring your VM's internal access controls, encryption, and application-level protections.
Yes. GPU VM
Your instance will remain down during maintenance on the host nodes. Prior notice with the date and time of the maintenance will be provided.
If the VM is shut down, automatic restart is not available. You will have the option to restart the VM from the client portal.
Currently, automated VM start/stop scheduling is not supported. All VM lifecycle actions must be managed manually.
Yes. 24/7 technical support via ticket is available.
Yes. We assist with instance creation and initial setup. However, all configurations, software installations, and activities within the VM are managed by you.
Yes. We have documentation and tutorials on various topics that can help you.
Yes. Live chat support is available.
You can report technical issues using the helpdesk option available in the client portal.
Billing & Pricing
Yes. Refunds are available for any unused balance within 60 days from the date of deposit. Requests submitted after this period will not be considered.
We accept all internationally valid debit and credit cards. Additional payment options, such as PayPal, may be introduced in the future - please refer to our website for the latest updates.
If your account balance reaches zero or goes negative, your instances will be stopped or hibernated. If payment is not made within 48 hours, the instances will be permanently deleted. After deletion, no backups or data recovery will be possible.
GPU VM pricing follows a pay-as-you-go model. You are charged based on the hourly rate multiplied by the total number of hours the GPU VM is in use. The hourly rate includes the GPU, vRAM, vCPU, system RAM, base storage, and IP address. Any additional storage volumes you create are billed separately.
Pricing is based on hourly usage, with charges applied only for the hours the resources are consumed.
Yes. Pay-as-you-go pricing is available.
Currently, no discounts are available. We may introduce them in the future.
We follow a pay-as-you-go pricing model, so you're billed only for the resources you actually use. There is no minimum usage requirement or long-term commitment.
Yes. CPU and RAM are included in the price.
The default storage that comes with the VM is not charged separately.
Yes. Bandwidth charges are included.
Yes. Inbound traffic is free.
The inbound and outbound bandwidth are part of the hourly cost of the VM.
No. Public IP is not charged.
No. There is no setup or activation fee.
Currently, we don't provide snapshot backups.
No. We don't provide a free trial.
No. Promotional credits are not available.
Tax is not separately charged. It is part of the hourly pricing.