Frequently Asked Questions
MNX has core features that handle the virtualization and management of layer 2 and layer 3 networks, and these networks can be broken down into two basic types:
Networks defined by the data center operator. These can be networks that have internet-routable IP ranges (so you can connect your applications to the public internet), or they might have a private address space that is not reachable from the public internet. These networks are typically shared by multiple customers (shared layer 2 and 3), but it's also possible for the operator to create truly isolated networks that can be used and accessed by a single user (private layer 2 and 3).
Networks defined by the user. These networks are sometimes called overlay or VXLAN networks, and they are private to the user who creates them. These network fabrics offer a convenient way to create isolated (private layer 2 and 3) networks to securely connect the internal components of your applications. At this time, each user can create up to 1024 different networks, so you can easily create separate networks for each application, and isolate staging instances of the application on their own network separated from production.
Importantly, these networking features are available to all the instance types MNX supports, including Docker containers, infrastructure containers, and hardware VMs.
Each instance, including Docker containers, infrastructure containers, and hardware VMs, gets one or more IP addresses on different networks. You can check the IP address(es) assigned to each instance in a number of different ways:
Sign into the MNX portal → Compute, where you'll see a full list of all of the containers and VMs you have running on MNX.io.
By clicking on the name of an instance, you'll see a summary of that instance's details. The first section, labeled Network Cards, includes several pieces of information, among them the instance's IP addresses.
The Triton CLI tool is a fast and convenient way to manage infrastructure on MNX. You can get a list of your instances with `triton instances`. To get the IP address of your instance using `triton`, run the following command:
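The example below is a sketch using the `triton instance ip` subcommand; `<instance>` is a placeholder for your instance name or UUID.

```sh
# Print the primary IP address of an instance (name or UUID)
triton instance ip <instance>
```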
This will give you the `primaryIp` address for your instance. On MNX, `primaryIp` is often a public IP address on your instance, but if you didn't request a public IP address (see below for how to request, or decline, public IPs for your instance), it will typically be the next most public IP address.
There may be more than one IP listed for your instance in an `ips` array. To get all of these IP addresses and more information about your instance, run:
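The sketch below uses `triton instance get`; substitute your instance name or UUID for `<instance>`.

```sh
# Dump the full JSON description of an instance
triton instance get <instance>
```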
That command will return information about your instance, including the image it is running, instance state, DNS names, and the instance IP(s). It will look something like this:
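Below is an abbreviated, illustrative sketch of the JSON shape; the values shown are placeholders rather than real output.

```json
{
  "id": "7db6c907-2693-42bc-ea9b-f38678f2554b",
  "name": "my-instance",
  "state": "running",
  "image": "c02a2044-c1bd-11e4-bd8c-dfc1db8b0182",
  "ips": [
    "192.168.128.13",
    "64.30.128.116"
  ],
  "networks": [
    "dcef4216-d34a-44fd-bf83-635172bf9e46",
    "a4294278-a494-4f7d-b5d6-983c70729c58"
  ],
  "primaryIp": "64.30.128.116"
}
```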
If you install the `json` JSON parser with `npm install -g json`, then you can extract the `primaryIp` address from the JSON output, instead of having to read the entire array:
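The sketch below pipes the instance JSON through `json` to pull out a single field; `<instance>` is a placeholder.

```sh
# Extract only the primaryIp field from the instance JSON
triton instance get <instance> | json primaryIp
```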
You can also get the `primaryIp` address when listing instances. For example:
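The sketch below uses the `-o` option to select output columns.

```sh
# List instances, showing only the name and primary IP columns
triton instance list -o name,primaryIp
```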
It is possible to find the primary IP address of your Docker containers with the Docker CLI. You can get your list of containers (to get the `<container>` name or ID) using `triton-docker ps`.
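You can then dump a container's full details with `triton-docker inspect`, sketched below with `<container>` standing in for the name or ID from `triton-docker ps`.

```sh
# Show the full JSON description of a Docker container
triton-docker inspect <container>
```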
This will output a large JSON array of information about your Docker container, and the primary IP address will be buried inside `NetworkSettings` → `IPAddress`.
If you want to get just the IP address, you can use a Go template to parse out that information:
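The sketch below uses the standard `--format` option to `docker inspect`, which accepts a Go template; `<container>` is a placeholder.

```sh
# Print only the container's primary IP address
triton-docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container>
```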
If you're inside an instance, within a shell, you can use either `ifconfig -a` or `ip addr` to show the IP address. Which command is available depends on the base OS/distro. For example:
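Below is an abbreviated, illustrative sketch of `ifconfig -a` output on such an instance; the private address shown is a placeholder, and uninteresting fields are elided.

```
$ ifconfig -a
eth0: flags=...
        inet 192.168.128.13  netmask 255.255.254.0  ...
eth1: flags=...
        inet 64.30.128.116  netmask 255.255.255.0  ...
```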
In the example above, from `ifconfig -a`, `eth0` is connected to a private overlay network and is best used for internal connections between application components. The `eth1` interface is connected to the public internet with a routable public IP address, 64.30.128.116.
Network Interface Controllers (NICs) connect your instances to a computer network. Each of your instances has one or more NICs, each connected to particular networks. This is a key feature of network virtualization and isolation in MNX: the virtual NICs maximize performance, security, and convenience. Docker containers, for example, can be directly connected to the public internet on their own NIC, and you'll never need to worry about port collisions among multiple containers trying to use ports 80 or 443, or other common ports.
Each NIC can give you access to a different network, allowing you to create the exact network topology you need to isolate your applications while still connecting the components.
We charge for bandwidth on interfaces that have a public IP address on an external network. So if you have an application and a database that communicate over external interfaces, you will be charged. If those instances communicate over internal interfaces with only private IP addresses, you are not charged.
To determine which interfaces, if any, are public, you'll need your container name or UUID, which you can get from `triton inst ls`. Use that name to get the networks attached to your instance with `triton inst get`.
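The sketch below lists just the network UUIDs for an instance, assuming the `json` tool mentioned above; `<instance>` is a placeholder.

```sh
# Show the UUIDs of the networks attached to an instance
triton inst get <instance> | json networks
```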
Using the networks' IDs, run `triton network get <network>` to determine if the network is public or private.
The first network, `dcef4216-d34a-44fd-bf83-635172bf9e46`, is a private network, while `a4294278-a494-4f7d-b5d6-983c70729c58` is a public network.
Want a shortcut? Find the networks associated with an instance with this one line command:
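One possible version of that shortcut, assuming the `json` tool is installed (a sketch, not the only way to do it):

```sh
# Look up every network attached to an instance in a single pipeline
triton inst get <instance> | json networks | json -a | xargs -L1 triton network get
```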
Note: Instances can be provisioned with a public network by selecting it at provision time via the web portal or with the `triton` CLI tool. For Docker instances, you can give a container public network access using the `-p` or `-P` flag.
To view all available networks, run `triton network list`.
Yes. You can choose the networks you want to connect your instance to when you create it, and add or remove network connections (NICs) while the instance is running.
By default, each instance will be connected to a private fabric network. It is also possible to connect instances to a public network. Exactly what type of private network and whether or not the instance gets a public network depends on the instance type.
Private fabric networks are a good choice for connecting the components of your application, since their isolation from the public internet and other users in the data center (for user-defined networks) can improve the security of those application components. For example, databases are typically connected just to other application components in the data center and not exposed on the public internet.
Docker

- Defaults to your default fabric network
- Can specify one or more different fabric networks when using `triton-docker run`
- NICs and networks can be added and removed after the instance is started
- Can get an interface and IP on the operator-defined "public" network by using the `-p` argument to `triton-docker run` (see the sketch after this list)

Infrastructure containers and VMs

- Defaults to the operator-defined shared private network
- Can get one or more user-defined fabric networks at start time
- NICs and networks can be added and removed after the instance is started
- Gets an interface and IP on the operator-defined "public" network by default
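The sketch below shows both provisioning paths. The image, package, and network names are placeholders, and the `--network` option assumes a CLI version that supports choosing networks at create time.

```sh
# Docker container: request a public interface by publishing a port
triton-docker run -d -p 80:80 nginx

# Infrastructure container or VM: attach a specific fabric network at create time
# (image and package names below are placeholders from your own data center)
triton instance create --name=my-instance --network=My-Fabric-Network base-64-lts g4-general-4G
```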
MNX user-defined networks (also called "fabrics" and "overlay networks") are built using VXLAN and 802.1Q industry standards. Check out the docs.
Every account in MNX starts with a private user-defined network named "default", which is the default network for Docker containers and an optional network for other instances. To list the networks available, go to the MNX portal → Networks or use the Triton CLI command `triton networks`.
Your container may also be connected to a public network, reachable over the internet. The IP address assigned on the public network varies based on the data center. By default, infrastructure containers and VM instances are given public VNICs; Docker containers do not have a public VNIC unless you request it with the `-p` or `-P` argument to `triton-docker run ...`.
Docker containers on MNX only get interfaces and IP addresses on the public internet if you request one with the `-p` or `-P` argument to `triton-docker run ...`.
Infrastructure containers and hardware virtual machines get public IP addresses by default.
Use the instructions above to find the IP address(es) for your instances.
Applications often have many components or services, only a small portion of which should be exposed on the public internet. You want certain components, such as a load balancer and the front end, to be easily reachable by users. However, databases and certain back-end components should most often be hidden from the public for the safety of your application.
Public IP addresses are optional. They're on by default for infrastructure containers and hardware VMs, but you can create containers without them if you want. For Docker containers, they're off by default, and you have to explicitly ask for a public IP address using the `-p` or `-P` argument to `triton-docker run`.
Firewalls can help protect your instances from network attacks by blocking (or allowing) traffic based on a set of rules you can define. This can be especially valuable for protecting instances on public or shared networks. MNX Cloud Firewall makes firewall management easy. In some cases, it's even automatic!
MNX Cloud Firewall can automatically apply firewall rules based on instance tags or Docker labels, making it easy to apply or change firewall policies.
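For example, a tag-based rule might look something like the sketch below; the tag name `www` and the port are illustrative, following the Cloud Firewall rule syntax.

```sh
# Allow inbound HTTP to every instance tagged "www"
triton fwrule create "FROM any TO tag www ALLOW tcp PORT 80"
```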
And, for Docker instances, MNX Cloud Firewall will automatically set rules that block traffic to all the ports on a public network except those specified in the `-p` argument to `triton-docker run`.
You can view and modify these rules in your terminal with the `triton fwrule` commands.
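A few sketches of common `triton fwrule` subcommands; the rule IDs are placeholders.

```sh
# List all firewall rules on your account
triton fwrule list

# Show a single rule by its ID
triton fwrule get <rule-id>

# Disable and re-enable a rule
triton fwrule disable <rule-id>
triton fwrule enable <rule-id>
```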