Frequently Asked Questions

MNX has core features that handle the virtualization and management of layer 2 and layer 3 networks. These networks can be broken down into two basic types:

  • Networks defined by the data center operator. These can be networks that have internet-routable IP ranges (so you can connect your applications to the public internet), or they might have a private address space that is not reachable from the public internet. These networks are typically shared by multiple customers (shared layer 2 and 3), but it's also possible for the operator to create truly isolated networks that can be used and accessed by a single user (private layer 2 and 3).

  • Networks defined by the user. These networks are sometimes called overlay or VXLAN networks, and they are private to the user who creates them. These network fabrics offer a convenient way to create isolated (private layer 2 and 3) networks to securely connect the internal components of your applications. At this time, each user can create up to 1024 different networks, so you can easily create separate networks for each application, and isolate staging instances of the application on their own network separated from production.

Importantly, these networking features are available to all the instance types MNX supports, including Docker containers, infrastructure containers, and hardware VMs.

Each instance, including Docker containers, infrastructure containers, and hardware VMs, gets one or more IP addresses on different networks. You can check the IP address(es) assigned to each instance in a number of different ways:

Sign into the MNX portal and go to Compute, where you'll see a full list of all of the containers and VMs you have running on MNX.io.

Click the name of an instance to see a summary of its details. In the first section, labeled Network Cards, several pieces of information will appear, including the instance's IP addresses.

The Triton CLI tool is a fast and convenient way to manage infrastructure on MNX. You can get a list of your instances with triton instances. To get the IP address of your instance using triton, run the following command:

triton inst ip <instance>

This will give you the primaryIp address for your instance. On MNX, primaryIp is often a public IP address on your instance, but if you didn't request a public IP address (see below for how to request, or decline, a public IP for your instance), it will typically be the next most public IP address.

There may be more than one IP listed for your instance in an ips array. To get all of these IP addresses and more information about your instance, run:

triton instance get <instance>

That command will return information about your instance, including the image it is running, instance state, DNS names, and the instance IP(s). It will look something like this:

{
    "id": "ea66e367-031b-47c4-8a56-3649becb789f",
    "name": "<instance name>",
    "type": "smartmachine",
    "brand": "lx",
    "state": "running",
    "image": "6e9f2ba8-0ec3-3b9e-86a9-c0b84f0d042a",
    "ips": [
        "192.168.128.30",
        "72.2.114.213"
    ],
    [...]
    "primaryIp": "72.2.114.213"
}

If you install the json parsing tool with npm install -g json, then you can extract the primaryIp address from the JSON output, instead of having to read the entire object:

triton instance get -j <instance> | json primaryIp
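
The same json tool can also print every address in the ips array, one per line, using the same pattern as the shortcut shown later in this page:

triton instance get -j <instance> | json ips | json -a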

You can also get the primaryIp address when listing instances. For example:

$ triton instances -l
ID                                    NAME   IMG                 BRAND   PACKAGE          STATE    FLAGS  PRIMARYIP       CREATED
40ea080c-8436-4fa3-9048-16b31ab063f0  gloom  base-64-lts@15.4.1  joyent  g4-highcpu-256M  running  -      165.225.151.17  2016-05-16T19:43:58.304Z

$ triton instances -o name,primaryIp
NAME                     PRIMARYIP
wp_nginx_1               165.225.156.123
wp_nginx_2               165.225.156.48
wp_mysql_1
wp_mysql_2

It is possible to find the primary IP address of your Docker containers with the Docker CLI. You can get your list of containers (to get the <container> name or ID) using triton-docker ps.

triton-docker inspect <container>

This will output a large JSON array of information about your Docker container, and the primary IP address will be buried inside NetworkSettings.IPAddress.

If you want just the IP address, this command uses a Go template to extract that information:

triton-docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container>

If you're inside an instance, within a shell, you can use either ifconfig -a or ip addr to show the IP addresses. Which command is available depends on the base OS or distro.

eth0      Link encap:Ethernet  HWaddr 90:b8:d0:7d:9f:e5
          inet addr:192.168.128.7  Bcast:192.168.131.255  Mask:255.255.252.0
          inet6 addr: fe80::92b8:d0ff:fe7d:9fe5/10 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:8500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:22 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:138 (138.0 B)  TX bytes:1480 (1.4 KB)

eth1      Link encap:Ethernet  HWaddr 90:b8:d0:20:27:06
          inet addr:64.30.128.116  Bcast:64.30.129.255  Mask:255.255.254.0
          inet6 addr: fe80::92b8:d0ff:fe20:2706/10 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:30142 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13926 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:42094996 (42.0 MB)  TX bytes:983433 (983.4 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MULTICAST  MTU:8232  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

In the example above, from ifconfig -a, eth0 is connected to a private overlay network and is best used for internal connections between application components. The eth1 interface is connected to the public internet with a routable public IP address, 64.30.128.116.
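
If your image ships iproute2 rather than the older net-tools (so ifconfig is not installed), the equivalent commands are:

ip addr show           # all interfaces and their addresses
ip -4 addr show eth1   # only the IPv4 addresses on eth1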

Network Interface Controllers (NICs) connect your instances to a computer network. Each of your instances has one or more NICs, each connected to particular networks. This is a key feature of network virtualization and isolation in MNX: the virtual NICs maximize performance, security, and convenience. Docker containers, for example, can be directly connected to the public internet on their own NIC, and you'll never need to worry about port collisions among multiple containers trying to use ports 80 or 443, or other common ports.

Each NIC can give you access to a different network, allowing you to create the exact network topology you need to isolate your applications while still connecting the components.
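
As a rough sketch of what that means in practice (the nginx image and the web1/web2 names are just examples), you could run two web servers that both listen on port 80; because each container gets its own NIC and IP, neither needs to be remapped:

$ triton-docker run -d -p 80 --name web1 nginx
$ triton-docker run -d -p 80 --name web2 nginx
# Each container answers on port 80 at its own IP address:
$ triton-docker inspect --format '{{ .NetworkSettings.IPAddress }}' web1
$ triton-docker inspect --format '{{ .NetworkSettings.IPAddress }}' web2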

We charge for bandwidth on interfaces with a public IP address on an external network. If your application and database communicate over external interfaces, you will be charged for that traffic. If those instances communicate over internal interfaces with only private IP addresses, you are not charged.

To determine which interfaces, if any, are public, you'll need your container name or UUID, which you can get from triton inst ls. Use that name with triton inst get to see the networks attached to your instance.

$ triton inst get <instance>
{
    "id": "faa1e2e8-25fc-4579-8257-f99ccfc7b0af",
    "name": "angry_fermi",
    "type": "smartmachine",
    [...]
    "networks": [
        "dcef4216-d34a-44fd-bf83-635172bf9e46",
        "a4294278-a494-4f7d-b5d6-983c70729c58"
    ],
    [...]
}

Using the networks' IDs, run triton network get <network> to determine if the network is public or private.

$ triton network get dcef4216-d34a-44fd-bf83-635172bf9e46
{
    "id": "dcef4216-d34a-44fd-bf83-635172bf9e46",
    "name": "My-Fabric-Network",
    "public": false,
    "fabric": true,
    "gateway": "192.168.128.1",
    "internet_nat": true,
    "provision_end_ip": "192.168.131.250",
    "provision_start_ip": "192.168.128.5",
    "resolvers": [
        "8.8.8.8",
        "8.8.4.4"
    ],
    "subnet": "192.168.128.0/22",
    "vlan_id": 2
}

$ triton network get a4294278-a494-4f7d-b5d6-983c70729c58
{
    "id": "a4294278-a494-4f7d-b5d6-983c70729c58",
    "name": "JoyentSDC-72.2.124.0/22",
    "public": true,
    "description": "JoyentSDC-72.2.124.0/22"
}

The first network, dcef4216-d34a-44fd-bf83-635172bf9e46, is a private network, while a4294278-a494-4f7d-b5d6-983c70729c58 is a public network.

Want a shortcut? Find the networks associated with an instance with this one-line command:

$ triton inst get <instance> | json networks | json -a | xargs -L1 -n1 triton network get
{
    "id": "2065ac74-8d04-4077-8682-7feffb0d7dee",
    "name": "Joyent-SDC-64.30.128.0/23",
    "public": true,
    "description": "Joyent-SDC-Public-Pool-64.30.128.0/23"
}
{
    "id": "43b174ba-03cd-48bb-8fb4-45c0584cfb15",
    "name": "JoyentSDC-192.168.24.0/21",
    "public": false
}

Note: Instances can be provisioned with a public network by selecting it at provision time via the web portal or with the triton CLI tool. For Docker instances, public network access is granted using the -p or -P flag.

To view all available networks, run triton network list.
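
As a rough sketch of selecting networks at provision time (assuming a triton CLI recent enough to support the -N/--network option; My-Fabric-Network is a placeholder name):

$ triton networks                                # list the networks available to you
$ triton create -N My-Fabric-Network <image> <package>
$ triton-docker run -d -p 80 nginx               # Docker: -p/-P attaches the public network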

Yes. You can choose the networks you want to connect your instance to when you create it, and add or remove network connections (NICs) while the instance is running.
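
A minimal sketch of adding and removing a NIC on a running instance, assuming a triton CLI version that includes the instance nic subcommands (My-Fabric-Network is a placeholder name):

$ triton instance nic list <instance>                       # see the current NICs and their MACs
$ triton instance nic create <instance> My-Fabric-Network   # attach a NIC on another network
$ triton instance nic delete <instance> <mac>               # detach a NIC by its MAC address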

By default, each instance will be connected to a private fabric network. It is also possible to connect instances to a public network. Exactly what type of private network and whether or not the instance gets a public network depends on the instance type.

Private fabric networks are a good choice for connecting the components of your application, since their isolation from the public internet and other users in the data center (for user-defined networks) can improve the security of those application components. For example, databases are typically connected just to other application components in the data center and not exposed on the public internet.

  • Docker

    • Defaults to your default fabric network

    • Can specify one or more different fabric networks when using triton-docker run

    • NICs and networks can be added and removed after the instance is started

    • Can get an interface and IP on the operator-defined "public" network by using the -p argument to triton-docker run

  • Infrastructure containers and VMs

    • Defaults to the operator-defined shared private network

    • Can get one or more user-defined fabric networks at start time

    • NICs and networks can be added and removed after the instance is started

    • Gets an interface and IP on the operator-defined "public" network by default

MNX user-defined networks (also called "fabrics" and "overlay networks") are built using VXLAN and 802.1Q industry standards. Check out the docs.

Every account in MNX starts with a private user-defined network named "default", which is the default network for Docker containers and an optional network for other instances. To list the networks available, go to Networks in the MNX portal or use the Triton CLI command triton networks.

Your container may also be connected to a public network, reachable over the internet. The public IP address assigned varies by data center. By default, infrastructure containers and VM instances are given public VNICs; Docker containers do not have a public VNIC unless you request one with the -p or -P argument to triton-docker run.

Docker containers on MNX only get interfaces and IP addresses on the public internet if you request one with the -p or -P argument to triton-docker run.

Infrastructure containers and hardware virtual machines get public IP addresses by default.

Use the instructions above to find the IP address(es) for your instances.

Applications often have many components or services, only a small portion of which should be exposed on the public internet. You want certain components, such as a load balancer and the front end, to be easily reachable by users. However, databases and certain back-end components should usually be hidden from the public for the safety of your application.

Public IP addresses are optional. They're on by default for infrastructure containers and hardware VMs, but you can create containers without them if you want. For Docker containers, they're off by default, and you have to explicitly ask for a public IP address using the -p or -P argument in your triton-docker run command.

Firewalls can help protect your instances from network attacks by blocking (or allowing) traffic based on a set of rules you can define. This can be especially valuable for protecting instances on public or shared networks. MNX Cloud Firewall makes firewall management easy. In some cases, it's even automatic!

MNX Cloud Firewall can automatically apply firewall rules based on instance tags or Docker labels, making it easy to apply or change firewall policies.

And, for Docker instances, MNX Cloud Firewall will automatically set rules that block traffic to all the ports on a public network except those specified in the -p argument in your triton-docker run command.

You can manage these rules from your terminal with triton fwrule.
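
As a brief sketch (the www tag is just an example), you might enable the firewall on an instance and then allow only HTTP and SSH traffic to instances carrying that tag:

$ triton instance enable-firewall <instance>
$ triton fwrule create 'FROM any TO tag www ALLOW tcp PORT 80'
$ triton fwrule create 'FROM any TO tag www ALLOW tcp PORT 22'
$ triton fwrule list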
