Aviatrix Gateway to Oracle DRG¶

Overview¶

This document describes how to configure an IPSec tunnel between an Aviatrix Gateway and an Oracle Dynamic Routing Gateway (DRG).


Deployment Guide¶

For this use case, we will create an IPSec connection from DRG first and then configure a Site2Cloud connection at the Aviatrix Controller.

Create an IPSec Connection from DRG¶

Note

Prerequisites:

  1. You have a DRG created and attached to a VCN.
  2. You have an Aviatrix Gateway provisioned in a VPC. You will need this gateway’s public IP address and its VPC CIDR for the steps below.

  1. Log in to your Oracle Cloud Console and create a route rule for the DRG.

    We need to modify the desired route table and create a route rule to take any traffic destined for the Aviatrix Gateway’s VPC CIDR and route it to the DRG.

    1. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks

    2. Click your VCN

    3. Select the desired route table(s) for your VCN

    4. Click Edit Route Rules

    5. Create a new route rule as follows and save it

      Target Type: Dynamic Routing Gateway
      Destination CIDR Block: Aviatrix GW’s VPC CIDR (172.19.0.0/16 in this example)
      Target Dynamic Routing Gateway: Select the desired existing DRG

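The effect of this route rule can be sketched as a simple containment check. This is an illustrative model only; the rule list and the `next_hop` helper below are not part of the OCI API:

```python
import ipaddress

# Illustrative model of the route rule above: traffic whose destination falls
# inside the Aviatrix GW's VPC CIDR is sent to the DRG.
ROUTE_RULES = [
    {"destination": ipaddress.ip_network("172.19.0.0/16"), "target": "DRG"},
]

def next_hop(dst_ip, rules, default="local VCN routing"):
    """Return the target of the first rule whose destination CIDR contains dst_ip."""
    dst = ipaddress.ip_address(dst_ip)
    for rule in rules:
        if dst in rule["destination"]:
            return rule["target"]
    return default

print(next_hop("172.19.4.10", ROUTE_RULES))  # DRG
print(next_hop("10.1.1.5", ROUTE_RULES))     # local VCN routing
```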

  2. Log in to your Oracle Cloud Console and create security rules.

    We will edit the security list associated with your VCN subnets, adding two new rules: one ingress rule for traffic coming from the Aviatrix Gateway’s VPC and one egress rule for traffic destined for the Aviatrix Gateway’s VPC.

    1. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks

    2. Click your VCN

    3. Select the desired security list(s) associated with your subnets

    4. Click Edit All Rules

    5. In Allowed Rule for Ingress section, enter the following values to create a rule to allow incoming traffic from Aviatrix Gateway’s VPC

      Source Type: CIDR
      Source CIDR: Aviatrix GW’s VPC CIDR (172.19.0.0/16 in this example)
      IP Protocols: All Protocols


    6. In Allowed Rule for Egress section, enter the following values to create a rule to allow outgoing traffic to the Aviatrix Gateway’s VPC

      Destination Type: CIDR
      Destination CIDR: Aviatrix GW’s VPC CIDR (172.19.0.0/16 in this example)
      IP Protocols: All Protocols

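Conceptually, the two rules just created act as CIDR filters in each direction. A minimal sketch of that behavior (the rule dictionaries and helper functions are illustrative, not the OCI security-list API):

```python
import ipaddress

# One ingress rule and one egress rule, both scoped to the Aviatrix GW's VPC CIDR.
AVX_VPC = ipaddress.ip_network("172.19.0.0/16")
INGRESS = [{"source": AVX_VPC, "protocol": "all"}]
EGRESS = [{"destination": AVX_VPC, "protocol": "all"}]

def ingress_allowed(src_ip):
    """True if some ingress rule's source CIDR contains the packet's source IP."""
    return any(ipaddress.ip_address(src_ip) in r["source"] for r in INGRESS)

def egress_allowed(dst_ip):
    """True if some egress rule's destination CIDR contains the packet's destination IP."""
    return any(ipaddress.ip_address(dst_ip) in r["destination"] for r in EGRESS)

print(ingress_allowed("172.19.1.7"))  # True
print(egress_allowed("8.8.8.8"))      # False (not covered by these rules)
```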

  3. Create a CPE object

    In this task, we create the CPE object, which is a logical representation of the Aviatrix Gateway.

    1. Under Core Infrastructure, go to Networking and click Customer-Premises Equipment

    2. Click Create Customer-Premises Equipment

    3. Enter the following values and click the Create button

      Create in Compartment: Leave as is (the VCN’s compartment)
      Name: A descriptive name for the CPE object
      IP Address: Public IP address of the Aviatrix Gateway
      Tags: Optional


  4. From the DRG, create an IPSec connection to the CPE object

    1. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateways

    2. Click the DRG created earlier

    3. Click Create IPSec Connection

    4. Enter the following values and click Create IPSec Connection button

      Create in Compartment: Leave as is (the VCN’s compartment)
      Name: A descriptive name for the IPSec connection
      Customer-Premises Equipment Compartment: Leave as is (the VCN’s compartment)
      Customer-Premises Equipment: Select the CPE object created earlier
      Static Route CIDR: Aviatrix GW’s VPC CIDR (172.19.0.0/16 in this example)
      Tags: Optional


    5. Once the IPSec connection enters the Available state, click the Action icon (three dots), and then click Tunnel Information. Copy the VPN headend’s IP Address and the Shared Secret; you will need them to configure the Site2Cloud connection at the Aviatrix Controller.


  5. Log in to the Aviatrix Controller

  6. Follow the steps in this guide. Use this table for specific field values.

    VPC ID/VNet Name: Select the Aviatrix Gateway’s VPC
    Connection Type: Unmapped
    Connection Name: A descriptive name for the Site2Cloud connection
    Remote Gateway Type: Oracle
    Tunnel Type: UDP
    Encryption over ExpressRoute/DirectConnect: Unchecked
    Enable HA: Unchecked
    Primary Cloud Gateway: Select the desired Aviatrix Gateway
    Remote Gateway IP Address: Enter the IP Address copied from the Oracle IPSec connection
    Pre-shared Key: Enter the Shared Secret copied from the Oracle IPSec connection
    Remote Subnet: Enter the Oracle VCN’s CIDR (10.1.1.0/24 in this example)
    Local Subnet: Enter the Aviatrix Gateway’s VPC CIDR (or leave it blank)


Test¶

Once configuration is complete, test the tunnel by sending traffic between instances in the Aviatrix Gateway’s VPC and the Oracle VCN.

Log in to the Aviatrix Controller and go to the Site2Cloud page. Verify that the Status of the Site2Cloud connection created above is “Up”.


Troubleshooting¶

Wait 2-3 minutes for the tunnel to come up. If it does not come up within that time, check the IP addresses to confirm they are accurate. Additional troubleshooting is available in the Diagnostics tab.
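If you script this verification, a poll-with-timeout loop mirrors the 2-3 minute wait described above. The `get_status` callable is a placeholder for however you read the connection status (for example, via the Controller's API); it is not a real Aviatrix function:

```python
import time

def wait_for_tunnel_up(get_status, timeout_s=180, interval_s=10):
    """Poll a status callable until it reports 'Up' or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == "Up":
            return True
        time.sleep(interval_s)
    return False

# Example with a stubbed status source that comes up on the third poll:
statuses = iter(["Down", "Down", "Up"])
print(wait_for_tunnel_up(lambda: next(statuses), timeout_s=5, interval_s=0))  # True
```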

Appendix: Enable HA¶

You can enable HA for an Aviatrix Site2Cloud connection to an Oracle DRG. Add the following extra steps to the configuration.


Create Aviatrix HA Gateway¶

Before creating the Site2Cloud connection, follow this guide’s Backup Gateway and Tunnel HA section to create an Aviatrix HA gateway in the same VPC.

From Oracle Cloud console, create a second IPSec connection between the same DRG and Aviatrix HA Gateway¶

  1. Create a new CPE at Oracle Cloud Console for the Aviatrix HA Gateway:

    Create in Compartment: Leave as is (the VCN’s compartment)
    Name: A descriptive name for the second CPE object
    IP Address: Public IP address of the Aviatrix HA Gateway
    Tags: Optional
  2. Create a new IPSec connection at Oracle Cloud Console for the Aviatrix HA Gateway:

    Create in Compartment: Leave as is (the VCN’s compartment)
    Name: A descriptive name for the second IPSec connection
    Customer-Premises Equipment Compartment: Leave as is (the VCN’s compartment)
    Customer-Premises Equipment: Select the second CPE object created earlier
    Static Route CIDR: Aviatrix GW’s VPC CIDR (172.19.0.0/16 in this example)
    Tags: Optional
  3. Once the second IPSec connection enters the Available state, click the Action icon (three dots), and then click Tunnel Information. Copy the VPN headend’s IP Address and the Shared Secret.

Create Aviatrix Site2Cloud Connection with HA¶

From the Aviatrix Controller UI -> Site2Cloud page, click + Add New. Under Add a New Connection, make sure Enable HA is checked.

Checking Enable HA displays additional fields. All other fields should have the same values as the corresponding ones in the non-HA configuration.

Backup Gateway: Select the Aviatrix HA Gateway just created
Remote Gateway IP Address (Backup): Enter the IP Address copied from the second IPSec connection
Pre-shared Key (Backup): Enter the Shared Secret copied from the second IPSec connection

© Copyright 2021, Aviatrix Systems, Inc.

Source: https://docs.aviatrix.com/HowTos/site2cloud_oracledrg.html

Peering VCNs in different regions through a DRG

This topic is about remote VCN peering. In this case, remote means that the virtual cloud networks (VCNs) are each attached to a different dynamic routing gateway (DRG) that resides in a different region. If the VCNs you want to connect are in the same region, see Peering VCNs in the same region through a DRG.

This scenario is available to an upgraded or legacy DRG, though legacy DRGs will not support connecting DRGs in different tenancies.

Before you attempt to implement this scenario, ensure that:

  • VCN A is attached to DRG A in region 1
  • VCN B is attached to DRG B in region 2
  • Routing configuration in both DRGs is unchanged
  • Appropriate IAM permissions are applied for VCNs that are either in the same or different tenancies

Overview of Remote VCN Peering

Remote VCN peering is the process of connecting two VCNs in different regions. The peering allows the VCNs' resources to communicate using private IP addresses without routing the traffic over the internet or through your on-premises network. The VCNs can be in the same Oracle Cloud Infrastructure tenancy or different ones. Without peering, a given VCN would need an internet gateway and public IP addresses for the instances that need to communicate with another VCN in a different region.

Summary of Networking Components for Remote Peering

At a high level, the Networking service components required for a remote peering include:

  • Two VCNs with non-overlapping CIDRs, in different regions.

    Note

    No VCN CIDRs can overlap

    The two VCNs in the peering relationship cannot have overlapping CIDRs. Also, if a particular VCN has multiple peering relationships, those other VCNs must not have overlapping CIDRs with each other. For example, if VCN-1 is peered with VCN-2 and also VCN-3, then VCN-2 and VCN-3 must not have overlapping CIDRs.

    If you are configuring this scenario, you have to meet this requirement in the planning stage. Routing problems are likely when overlapping CIDRs occur, but you aren't prevented by the Console or API operations from creating a configuration that causes issues.

  • A dynamic routing gateway (DRG) attached to each VCN in the peering relationship. Your VCN already has a DRG if you're using a Site-to-Site VPN IPSec tunnel or an Oracle Cloud Infrastructure FastConnect private virtual circuit.
  • A remote peering connection (RPC) on each DRG in the peering relationship.
  • A connection between those two RPCs.
  • Supporting route rules to enable traffic to flow over the connection, and only to and from select subnets in the respective VCNs (if wanted).
  • Supporting security rules to control the types of traffic allowed to and from the instances in the subnets that need to communicate with the other VCN.
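The non-overlap requirement above is worth validating at the planning stage, since the Console and API do not stop you from creating an overlapping configuration. A small sketch using Python's standard `ipaddress` module:

```python
import ipaddress
from itertools import combinations

def check_no_overlaps(cidrs):
    """Return the list of CIDR pairs that overlap (an empty list means the plan is valid)."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [(str(a), str(b)) for a, b in combinations(nets, 2) if a.overlaps(b)]

# VCN-1 peered with both VCN-2 and VCN-3: all three CIDRs must be mutually disjoint.
print(check_no_overlaps(["10.0.0.0/16", "192.168.0.0/24", "172.16.0.0/20"]))  # []
print(check_no_overlaps(["10.0.0.0/16", "10.0.5.0/24"]))  # [('10.0.0.0/16', '10.0.5.0/24')]
```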

The following diagram illustrates the components.

This image shows the basic layout of two VCNs that are remotely peered, each with a remote peering connection on the DRG

Note

A given VCN can use the connected RPCs to reach only VNICs in the other VCN, or your on-premises network if the DRG has a connection to an on-premises CPE. For example, if VCN-1 in the preceding diagram were to have an internet gateway, the instances in VCN-2 could NOT use it to send traffic to endpoints on the internet. For more information, see Important Implications of Peering.

Important Implications of Peering

If you haven't yet, read Important Implications of Peering to understand important access control, security, and performance implications for peered VCNs.

Peering VCNs in different tenancies has some permissions complications that need to be resolved in both tenancies. See IAM policies related to DRG peering for details on the permissions needed.

Spoke-to-Spoke: Remote Peering with Transit Routing (Legacy DRGs Only)

Note

The scenario this section mentions is still supported, but Oracle recommends you use the DRG transit routing method described in Using a DRG to route traffic through a centralized network virtual appliance.

Imagine that in each region you have multiple VCNs in a hub-and-spoke layout, as shown in the following diagram. This type of layout within a region is discussed in detail in Transit Routing inside a hub VCN. The spoke VCNs in a given region are locally peered with the hub VCN in the same region, using local peering gateways.

You can set up remote peering between the two hub VCNs. You can then also set up transit routing for the hub VCN's DRG and LPGs, as discussed in Transit Routing inside a hub VCN. This setup allows a spoke VCN in one region to communicate with one or more spoke VCNs in the other region without needing a remote peering connection directly between those VCNs.

For example, you could configure routing so that resources in VCN-1-A could communicate with resources in VCN-2-A and VCN-2-B by way of the hub VCNs. That way, VCN 1-A is not required to have a separate remote peering with each of the spoke VCNs in the other region. You could also set up routing so that VCN-1-B could communicate with the spoke VCNs in region 2, without needing its own remote peerings to them.

This image shows the basic layout of two regions with VCNs in a hub-and-spoke layout, with remote peering between the hub VCNs.

Spoke-to-Spoke: Remote Peering with Transit Routing (Upgraded DRG)

Note

The scenario in this section uses the DRG transit routing method described in Using a DRG to route traffic through a centralized network virtual appliance.

Imagine that in each region you have multiple VCNs in a hub-and-spoke layout, as shown in the following diagram. This type of layout within a region is discussed in detail in Transit Routing inside a hub VCN. The spoke VCNs in a given region are locally peered with the hub DRG/VCN pair in the same region by mutual connection to the DRG.

You can set up remote peering between the two hub VCNs. You can then also set up transit routing for the hub VCN's DRG, as discussed in Using a DRG to route traffic through a centralized network virtual appliance. This setup allows a spoke VCN in one region to communicate with one or more spoke VCNs in the other region without needing a remote peering connection directly between those VCNs.

For example, you could configure routing so that resources in VCN-1-A could communicate with resources in VCN-2-A and VCN-2-B by way of the hub VCNs. That way, VCN 1-A is not required to have a separate remote peering with each of the spoke VCNs in the other region. You could also set up routing so that VCN-1-B could communicate with the spoke VCNs in region 2, without needing its own remote peerings to them.

This image shows the basic layout of two regions with VCNs in a hub-and-spoke layout, with remote peering between the hub VCNs.

Important Remote Peering Concepts

The following concepts help you understand the basics of VCN peering and how to establish a remote peering.

PEERING
A peering is a single peering relationship between two VCNs. Example: If VCN-1 peers with two other VCNs, two peerings exist. The remote part of remote peering indicates that the VCNs are in different regions. For this method of remote peering, the VCNs can be in the same tenancy or in different tenancies.
VCN ADMINISTRATORS
In general, VCN peering can occur only if both of the VCN administrators agree to it. In practice, the two administrators must:
  • Share some basic information with each other.
  • Coordinate to set up the required Oracle Cloud Infrastructure Identity and Access Management policies to enable the peering.
  • Configure their VCNs for the peering.
Depending on the situation, a single administrator might be responsible for both VCNs and the related policies. The VCNs can be in the same tenancy or in different tenancies.
For more information about the required policies and VCN configuration, see IAM policies related to DRG peering.
ACCEPTOR AND REQUESTOR
To implement the IAM policies required for peering, the two VCN administrators must designate one administrator as the requestor and the other as the acceptor. The requestor must be the one to initiate the request to connect the two RPCs. In turn, the acceptor must create a particular IAM policy that gives the requestor permission to connect to RPCs in the acceptor's compartment. Without that policy, the requestor's request to connect fails.
REGION SUBSCRIPTION
To peer with a VCN in another region, your tenancy must first be subscribed to that region. For information about subscribing, see Managing Regions.
REMOTE PEERING CONNECTION (RPC)
A remote peering connection (RPC) is a component you create on the DRG attached to your VCN. The RPC's job is to act as a connection point for a remotely peered VCN. As part of configuring the VCNs, each administrator must create an RPC for the DRG on their VCN. A given DRG must have a separate RPC for each remote peering it establishes for the VCN (maximum 300 RPCs per tenancy). To continue with the previous example: the DRG on VCN-1 would have two RPCs to peer with two other VCNs. In the API, a RemotePeeringConnection is an object that contains information about the peering. You can't reuse an RPC to later establish another peering with it.
CONNECTION BETWEEN TWO RPCS
When the requestor initiates the request to peer (in the Console or API), they're effectively asking to connect the two RPCs. The requestor must have information to identify each RPC (such as the RPC's region and OCID).
Either VCN administrator can terminate a peering by deleting their RPC. In that case, the other RPC's status switches to REVOKED. The administrator could instead render the connection non-functional by removing the route rules that enable traffic to flow across the connection (see the next section).
ROUTING TO THE DRG
As part of configuring the VCNs, each administrator must update the VCN's routing to enable traffic to flow between the VCNs. For each subnet that needs to communicate with the other VCN, you update the subnet's route table. The route rule specifies the destination traffic's CIDR and your DRG as the target. Your DRG routes traffic that matches that rule to the other DRG, which in turn routes the traffic to the next hop in the other VCN.
In the following diagram, VCN-1 and VCN-2 are peered. Traffic from an instance in Subnet A (10.0.0.15) destined for an instance in VCN-2 (192.168.0.15) is routed to DRG-1 based on the rule in Subnet A's route table. From there the traffic is routed through the RPCs to DRG-2, and then from there, on to the destination in Subnet X.
This image shows the route tables and path of traffic routed from one DRG to the other.
Note

As mentioned earlier, a given VCN can use the connected RPCs to reach only VNICs in the other VCN or your on-premises network, and not destinations on the internet. For example, in the preceding diagram, VCN-2 cannot use the internet gateway attached to VCN-1.
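The hop-by-hop lookup described above can be sketched as longest-prefix matching over each hop's route table. The CIDRs and table contents below are assumptions based on the example addresses in the text, not real OCI route tables:

```python
import ipaddress

# Subnet A (10.0.0.15) in VCN-1 reaches 192.168.0.15 in VCN-2 via DRG-1 -> DRG-2.
SUBNET_A_ROUTES = [("192.168.0.0/16", "DRG-1")]
DRG1_ROUTES = [("192.168.0.0/16", "DRG-2 (via RPC)")]
DRG2_ROUTES = [("192.168.0.0/16", "VCN-2 local routing")]

def lookup(dst_ip, routes):
    """Longest-prefix match over (cidr, next_hop) pairs; None if nothing matches."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(c), hop) for c, hop in routes
               if dst in ipaddress.ip_network(c)]
    return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

hops = [lookup("192.168.0.15", t) for t in (SUBNET_A_ROUTES, DRG1_ROUTES, DRG2_ROUTES)]
print(" -> ".join(hops))  # DRG-1 -> DRG-2 (via RPC) -> VCN-2 local routing
```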

SECURITY RULES
Each subnet in a VCN has one or more security lists that control traffic in and out of the subnet's VNICs at the packet level. You can use security lists to control the type of traffic allowed with the other VCN. As part of configuring the VCNs, each administrator must determine which subnets in their own VCN need to communicate with VNICs in the other VCN and update their subnet's security lists accordingly.
If you use network security groups (NSGs) to implement security rules, notice that you can write security rules for an NSG that specify another NSG as the source or destination of traffic. However, the two NSGs must belong to the same VCN.

Important Implications of Peering

If you haven't yet, read Important Implications of Peering to understand important access control, security, and performance implications for peered VCNs.

Setting Up a Remote Peering

This section covers the general process for setting up a peering between two VCNs in different regions.

Important

The following procedure assumes that:

Overview of required steps:

  1. Create the RPCs: Each VCN administrator creates an RPC for their own VCN's DRG.
  2. Share information: The administrators share the basic required information.
  3. Establish the connection: The requestor connects the two RPCs (see Important Remote Peering Concepts for the definition of the requestor and acceptor).
  4. Update route tables: Each administrator updates their VCN's route tables to enable traffic between the peered VCNs as intended.
  5. Update security rules: Each administrator updates their VCN's security rules to enable traffic between the peered VCNs as intended.

If you want, the administrators can perform tasks 4 and 5 before establishing the connection. Each administrator needs to know the CIDR block or specific subnets from the other's VCN and share that in task 2.

Task A: Create the RPCs

Each administrator creates an RPC for their own VCN's DRG. "You" in the following procedure means an administrator (either the acceptor or requestor).

Note

Required IAM Policy to Create RPCs

If the administrators already have broad network administrator permissions (see Let network admins manage a cloud network), then they have permission to create, update, and delete RPCs. Otherwise, here's an example policy giving the necessary permissions to a group called RPCAdmins. The second statement is required because creating an RPC affects the DRG it belongs to, so the administrator must have permission to manage DRGs.
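The example policy referenced in the note above appears to have been lost in extraction. A sketch consistent with its description (two statements for the group RPCAdmins, the second granting DRG management) might look like:

```
Allow group RPCAdmins to manage remote-peering-connections in tenancy
Allow group RPCAdmins to manage drgs in tenancy
```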

  1. In the Console, confirm you're viewing the compartment that contains the DRG that you want to add the RPC to. For information about compartments and access control, see Access Control.
  2. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  3. Click the DRG you're interested in.
  4. Under Resources, click Remote Peering Connections.
  5. Click Create Remote Peering Connection.
  6. Enter the following:

    • Name: A friendly name for the RPC. It doesn't have to be unique, and it cannot be changed later in the Console (but you can change it with the API). Avoid entering confidential information.
    • Create in compartment: The compartment where you want to create the RPC, if different from the compartment you're currently working in.
  7. Click Create Remote Peering Connection.

    The RPC is then created and displayed on the Remote Peering Connections page in the compartment you chose.

  8. If you're the acceptor, record the RPC's region and OCID to later give to the requestor.
  9. If the two VCNs are in different tenancies, record your tenancy OCID (found on the bottom of the page in the Console). Give the OCID to the other administrator later.

Task B: Share information

  • If you're the acceptor, give this information to the requestor (for example, by email or other out-of-band method):

    • The region your VCN is in (the requestor's tenancy must be subscribed to this region).
    • Your RPC's OCID.
    • The CIDR blocks for subnets in your VCN you want available to the other VCN. The requestor needs this information when setting up routing for the requestor VCN.
    • If the VCNs are in different tenancies: the OCID for your tenancy.
  • If you're the requestor, give this information to the acceptor:

    • The region your VCN is in (the acceptor's tenancy must be subscribed to this region).
    • If the VCNs are in the same tenancy: The name of the IAM group that will be granted permission to create a connection in the acceptor's compartment (in the example in the next task, the group is RequestorGrp).
    • If the VCNs are in different tenancies: the OCID for your tenancy.
    • The CIDR blocks for subnets in your VCN you want available to the other VCN. The acceptor needs this information when setting up routing for the acceptor VCN.

Task C: Establish the connection


The requestor must perform this task.

Prerequisite: The requestor must have:

  • The region the acceptor's VCN is in (the requestor's tenancy must be subscribed to the region).
  • The OCID of the acceptor's RPC.
  • The OCID of the acceptor's tenancy (only if the acceptor's VCN is in a different tenancy)
  1. In the Console, view the details for the requestor RPC that you want to connect to the acceptor RPC.
  2. Click Establish Connection.
  3. Enter the following:

    • Region: The region that contains the acceptor's VCN. The list includes only regions that both support remote VCN peering and that your tenancy is subscribed to.
    • Remote Peering Connection OCID: The OCID of the acceptor's RPC.
  4. Click Establish Connection.

The connection is established and the RPC's state changes to PEERED.

Task D: Configure the route tables

As mentioned earlier, each administrator can do this task before or after the connection is established.

Prerequisite: Each administrator must have the CIDR block or specific subnets for the other VCN.

For your own VCN:

  1. Determine which subnets in your VCN need to communicate with the other VCN.
  2. Update the route table for each of those subnets to include a new rule that directs traffic destined for the other VCN to your DRG:

    1. Open the navigation menu, click Networking, and then click Virtual Cloud Networks.

    2. Click the VCN you're interested in.
    3. Under Resources, click Route Tables.
    4. Click the route table you're interested in.
    5. Click Add Route Rule and enter the following:

      • Target Type: Dynamic Routing Gateway. The VCN's attached DRG is automatically selected as the target, and you don't have to specify the target yourself.
      • Destination CIDR Block: The other VCN's CIDR block. If you want, you can specify a subnet or particular subset of the peered VCN's CIDR.
      • Description: An optional description of the rule.
    6. Click Add Route Rule.

Any subnet traffic with a destination that matches the rule is routed to your DRG. For more information about setting up route rules, see VCN Route Tables.

Tip

Without the required routing, traffic doesn't flow between the peered DRGs. If a situation occurs where you need to temporarily stop the peering, you can simply remove the route rules that enable traffic. You don't need to delete the RPCs.

Task E: Configure the security rules

As mentioned earlier, each administrator can do this task before or after the connection is established.

Prerequisite: Each administrator must have the CIDR block or specific subnets for the other VCN. In general, use the same CIDR block you used in the route table rule in Task D: Configure the route tables.

What rules should you add?

  • Ingress rules for the types of traffic you want to allow from the other VCN, specifically from the VCN's CIDR or specific subnets.
  • Egress rule to allow outgoing traffic from your VCN to the other VCN. If the subnet already has a broad egress rule for all types of protocols to all destinations (0.0.0.0/0), then you don't need to add a special one for the other VCN.

Note

The following procedure uses security lists, but you could instead implement the security rules in a network security group and then create the subnet's resources in that NSG.

For your own VCN:

  1. Determine which subnets in your VCN need to communicate with the other VCN.
  2. Update the security list for each of those subnets to include rules to allow the intended egress or ingress traffic specifically with the CIDR block or subnet of the other VCN:

    1. In the Console, while viewing the VCN you're interested in, click Security Lists.
    2. Click the security list you're interested in.
    3. Under Resources, click either Ingress Rules or Egress Rules depending on the type of rule you want to work with.
    4. If you want to add a rule, click Add Ingress Rule (or Add Egress Rule).

    5. If you want to delete an existing rule, click the Actions menu, and then click Remove.
    6. If you want to edit an existing rule, click the Actions menu, and then click Edit.

For more information about security rules, see Security Rules.

Source: https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/scenario_e.htm

Using a DRG to route traffic through a centralized network virtual appliance

The three primary transit routing scenarios are:

  • Access between multiple networks through a single DRG with a firewall between networks: The scenario covered in this topic. This scenario uses the DRG as the hub, with routing configured to send packets through a firewall instance in a dedicated virtual cloud network (VCN) before they can be sent to another network.
  • Access to multiple VCNs in the same region: This scenario enables communication between your on-premises network and multiple VCNs in the same region over a single FastConnect private virtual circuit or Site-to-Site VPN, with a VCN as the hub. See Transit Routing inside a hub VCN
  • Private access to Oracle services: This scenario gives your on-premises network private access to Oracle services with a VCN as the hub, so your on-premises hosts can use their private IP addresses and traffic does not go over the internet. See Private Access to Oracle Services.

Highlights

  • You can use FastConnect or Site-to-Site VPN to connect your on-premises network with multiple VCNs in the same region or in another region, in a hub-and-spoke topology.
  • When the dynamic routing gateway (DRG) acts as the hub, all VCNs can be in different regions or tenancies. For accurate routing, the CIDR blocks of the various subnets accessible to the on-premises network and other connected VCNs must not overlap.
  • A dynamic routing gateway can act as the hub to communicate between VCNs or with the on-premises network. This DRG has attachments for peering connections to VCNs (referred to as spoke VCNs in this topic).
  • To enable the intended traffic to flow between the attached networks, through the DRG and the firewall VCN, to a peered spoke VCN, create route rules for the DRG, for the firewall VCN (or the firewall VCN's DRG attachment), for the spoke VCNs, and for the spoke VCNs' subnets.
  • You can set up transit routing through a private IP in the firewall VCN. For example, you might want to filter or inspect the traffic between the on-premises network and a spoke VCN. In that case, you route the traffic to a private IP on an instance in the firewall VCN for inspection, and the resulting traffic continues to its destination. This topic covers both situations: transit routing directly between gateways on the firewall VCN, and transit routing through a private IP.
  • By configuring route tables, you can control whether a particular subnet in a peered spoke VCN is advertised to the on-premises network.

Tip

There's another scenario that lets you connect your on-premises network to multiple VCNs. Instead of using a single DRG and hub-and-spoke topology, you set up a separate DRG for each VCN and a separate private virtual circuit over a single FastConnect. However, the scenario can be used only with FastConnect through a third-party provider or through colocation with Oracle. The VCNs must be in the same region and same tenancy. For more information, see FastConnect with Multiple DRGs and VCNs.

Overview of Transit Routing through a Private IP

Transit routing is simply routing traffic to either a VCN or an on-premises network through a central hub VCN. Here's a basic example of why you might use transit routing: you have a large organization with different departments, each with their own VCN. Each VCN needs access to the other VCNs, but you want to ensure security by sending all traffic through a virtual network appliance running a firewall.

Note

A hub is a logical concept in a hub-and-spoke topology. If you want spokes to communicate directly to each other, the hub can be just a DRG. If you want all spoke-to-spoke traffic to pass through a firewall, the hub is the combination of the DRG and the firewall VCN.

This networking scenario optionally involves connecting your on-premises network to a VCN with either Oracle Cloud Infrastructure FastConnect or Site-to-Site VPN. These two basic scenarios illustrate that topology: Scenario B: Private Subnet with a VPN and Scenario C: Public and Private Subnets with a VPN.

This scenario uses a hub-and-spoke topology, as illustrated in the following diagram. The term hub here means only that the VCN has a firewall that traffic must pass through when one spoke communicates with another spoke in this hub-and-spoke design. The on-premises network connection shown in the diagram is not covered in the detailed steps that follow; it is shown for reference.

DRG transit routing with a firewall VCN

Use this scenario if you want to create a hub-and-spoke topology and route all traffic between spokes through a firewall device in the hub. In this example, all VCNs are in the same region and connect to a DRG in that region, but they could be in different regions or tenancies. The on-premises network shown is optional, and could instead be a VCN in another region or tenancy. In this scenario, traffic sent from the on-premises network goes to the DRG, then to the firewall in VCN-Fire, then back to the DRG to be routed to VCN-B. Similarly, traffic sent from VCN-A is first routed by the DRG to VCN-Fire and then to VCN-C.

Summary of New Concepts for Experienced Networking Service Users

If you're already familiar with the Networking service and local peering, the most important new concepts to understand are:

  • For each spoke VCN subnet that needs to communicate with another network attached to the DRG, update the subnet's route table with a rule that sets the target for all traffic (the next hop) as the DRG.
  • Add a DRG route table for the firewall VCN attachment, associate it with that attachment (inside the DRG), and add a route rule whose target depends on your situation.
  • Add another route table to the hub VCN-Fire, associate it with the firewall VCN's attachment to the DRG, and add a route rule with a target that depends on your situation:

    • Routing traffic to another network: Set the target (the next hop) as the DRG for all traffic destined for another VCN or for the on-premises network (or a specific subnet in that network).

Before you begin

Before you attempt to implement this scenario, ensure that:

  1. VCN-A, VCN-B, and VCN-C (the "spoke" VCNs) are already created; none of them is yet attached to a DRG.
  2. VCN-Fire is already created and its subnet Subnet-H has a compute instance with a private IPv4 address running firewall software. This VCN is not yet attached to any DRG.
  3. All VCNs in the scenario have non-overlapping CIDRs.
  4. (Optional) The on-premises network is connected to the DRG using FastConnect or Site-to-Site VPN.
  5. All necessary IAM policies are already in place. See IAM policies related to DRG peering for details.

Process summary

Configuring transit routing involves these steps:

  1. Create a DRG named DRG-Transit.
  2. Attach spoke VCNs VCN-A, VCN-B, and VCN-C to DRG-Transit.
  3. Attach VCN-Fire to DRG-Transit.
  4. Create a route table named "To-Firewall" in DRG-Transit with a single static rule sending all traffic to the VCN-Fire's attachment.
  5. Change the DRG route table used by the spoke VCN attachments to "To-Firewall."
  6. Create an import DRG route distribution in DRG-Transit called Import_Spoke_Routes with three statements, each importing routes from the VCN attachments used by VCN-A, VCN-B, and VCN-C.
  7. Create a DRG route table named "From-Firewall" in DRG-Transit and specify its import route distribution to Import_Spoke_Routes.
  8. Update the DRG route table of VCN-Fire's attachment to use the "From-Firewall" route table.
  9. Configure VCN-Fire's default route table to send all incoming traffic to the firewall instance.
  10. Configure Subnet-H to send all traffic destined to addresses in the VCN CIDRs of VCN-A, VCN-B, and VCN-C to the DRG attachment.
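The routing that these steps establish can be sketched as a pair of DRG route tables. The following is a minimal Python model, not OCI code; the spoke CIDRs (10.1.0.0/16, and so on) and attachment names are assumptions for illustration:

```python
import ipaddress

# Hypothetical CIDRs for the spoke VCNs (not specified in this scenario).
SPOKES = {"Spoke-A": "10.1.0.0/16", "Spoke-B": "10.2.0.0/16", "Spoke-C": "10.3.0.0/16"}

# "To-Firewall": a single static catch-all rule used by every spoke attachment.
to_firewall = [("0.0.0.0/0", "Firewall-Attach")]
# "From-Firewall": holds each spoke's routes via the import route distribution.
from_firewall = [(cidr, name) for name, cidr in SPOKES.items()]

# Which DRG route table each attachment uses for traffic entering the DRG.
attachment_route_table = {**{s: to_firewall for s in SPOKES},
                          "Firewall-Attach": from_firewall}

def next_hop(ingress_attachment, dest_ip):
    """Longest-prefix match over the DRG route table of the ingress attachment."""
    table = attachment_route_table[ingress_attachment]
    matches = [(ipaddress.ip_network(cidr), hop) for cidr, hop in table
               if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(cidr)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Spoke-to-spoke traffic is first steered to the firewall VCN...
assert next_hop("Spoke-A", "10.2.0.5") == "Firewall-Attach"
# ...and traffic re-entering the DRG from the firewall reaches the right spoke.
assert next_hop("Firewall-Attach", "10.2.0.5") == "Spoke-B"
```

The model shows why two tables are needed: spoke attachments and the firewall attachment must route the same destination differently.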

Turning off transit routing

To turn off transit routing, remove the rules from:

  • The route table associated with the DRG attachment.
  • The route table on the firewall VCN.

A route table can be associated with a resource but have no rules. Without at least one rule, a route table does nothing.

A DRG attachment or LPG can exist without a route table associated with it. However, after you associate a route table with a DRG attachment or LPG, one must always remain associated with it, although you can switch to a different route table. You can also edit the table's rules, or delete some or all of the rules.

Example: Transit routing with a DRG hub and a firewall in an attached VCN

The examples in this section show a DRG acting as a hub and an attached VCN with a firewall. You can configure as many spoke VCNs as necessary by repeating Task 2: Attach the spoke VCNs. The FastConnect link in the diagram is not covered in the detailed steps that follow; it is shown for reference.

Diagram showing route tables for transit-routing enabled DRG and a hub VCN

Task 1: Create DRG-Transit

Create the DRG (named DRG-Transit) that routes traffic between all attached VCNs.

  1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  2. Choose a compartment you have permission to work in (on the left side of the page). The page updates to display only the resources in that compartment. If you're not sure which compartment to use, contact an administrator. For more information, see Access Control.
  3. Click Create Dynamic Routing Gateway.
  4. Enter the following items:

    • Name: DRG-Transit
    • Create in Compartment: The compartment where you want to create the DRG, which could be different from the compartment you're currently working in.
  5. Click Create Dynamic Routing Gateway.

The new DRG is created and then displayed on the Dynamic Routing Gateways page of the compartment you chose. The DRG is in the "Provisioning" state for a short period. You can connect it to other parts of your network only after provisioning is complete.

Provisioning a DRG includes creating two default route tables: one DRG route table for VCN attachments and one DRG route table for all other resources such as virtual circuits and IPSec tunnels. These route tables are used to route traffic coming into the DRG.

Task 2: Attach the spoke VCNs

Attach VCN-A, VCN-B, and VCN-C to DRG-Transit.

Note

The VCN subnet route tables sending traffic to the DRG attachment need to account for the CIDRs of the other two VCNs.

Note

A DRG can be attached to many VCNs, but a VCN can be attached to only one DRG at a time. The attachment is automatically created in the compartment that holds the VCN. A VCN does not need to be in the same compartment as the DRG.

You can eliminate local peering connections from your overall network design if you connect several VCNs in the same region to the same DRG and configure the DRG routing tables appropriately.

The following instructions have you navigate to the DRG and then choose which VCN to attach. Repeat this task for all three VCNs (VCN-A, VCN-B, and VCN-C), and create a different DRG attachment for each VCN.

  1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  2. Click the DRG you want to attach to VCN A, DRG-Transit.
  3. Under Resources, click Virtual cloud network attachments.
  4. Click Create VCN attachment.
    • (Optional) Enter Spoke-A, or give the attachment point some other descriptive name. If you don't specify a name, one is created for you.
    • Select VCN-A from the list.
  5. Click Create VCN attachment.

The attachment is in the "Attaching" state for a short period. Each of the spoke VCNs gets a unique attachment.

Once you have done this for all three VCNs (VCN-A, VCN-B, and VCN-C), you have direct routing between these VCNs.

Task 3: Attach the firewall VCN

Attach VCN-Fire to DRG-Transit.

  1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  2. Click the DRG you want to attach to a VCN, in this case DRG-Transit.
  3. Under Resources, click Virtual cloud network attachments.
  4. Click Create VCN attachment.
    • (Optional) Enter Firewall-Attach or give the attachment point some other descriptive name. If you don't specify a name, one is created for you.
    • Select VCN-Fire from the list of VCNs.
  5. Click Create VCN attachment.

The attachment is in the "Attaching" state for a short period. The VCN attachment uses the default DRG route table for VCNs. Wait for the attachment to complete before moving on.

Task 4: Create the DRG route table sending ingress traffic to the firewall

Create a DRG route table named "To-Firewall" in DRG-Transit with a single static rule sending all traffic to the VCN-Fire's attachment.

  1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  2. Click the DRG you're interested in, DRG-Transit.
  3. Under Resources, click DRG Route Tables.
  4. Click Create DRG Route Table.
  5. Enter the following:

    • Name: Enter TO-FIREWALL, or choose some other descriptive name.
    • Destination CIDR: Enter 0.0.0.0/0. This static route sends all VCN-A, VCN-B, and VCN-C traffic to the firewall.
    • Next Hop Attachment Type: Choose Virtual Cloud Network.
    • Next Hop Attachment: Choose VCN-Fire from the list.
  6. Click Create Route Table.
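A single 0.0.0.0/0 rule suffices here because every IPv4 destination falls inside that prefix; a more specific prefix would win only if one were added. A small sketch with Python's ipaddress module (the CIDRs are illustrative):

```python
import ipaddress

catch_all = ipaddress.ip_network("0.0.0.0/0")

# Any spoke-to-spoke or spoke-to-on-premises destination matches the rule,
# so all traffic entering the DRG from a spoke is steered toward VCN-Fire.
for dest in ["10.2.0.5", "172.16.30.7", "192.168.1.1"]:
    assert ipaddress.ip_address(dest) in catch_all

# A more specific rule, if present, would win by longest-prefix match:
rules = [ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("10.2.0.0/16")]
best = max((r for r in rules if ipaddress.ip_address("10.2.0.5") in r),
           key=lambda r: r.prefixlen)
assert str(best) == "10.2.0.0/16"
```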

Task 5: Update the route table of spoke VCN attachments

Change the DRG route tables used by the spoke VCN attachments (VCN-A, VCN-B, and VCN-C) to the "To-Firewall" table created in the previous task, which sends all incoming traffic to VCN-Fire.

  1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  2. Click the DRG you're interested in, DRG-Transit.
  3. Under Resources, click Virtual Cloud Network Attachments.
  4. Click the name of the DRG attachment used by one of the VCNs.
  5. Click Edit.
  6. Click Show Advanced Options.
  7. In the DRG route table tab, select TO-FIREWALL from the list of available route tables.
  8. Click Save Changes.

Repeat this task for all three spoke VCN attachments (VCN-A, VCN-B, and VCN-C) before proceeding to the next task.

Task 6: Create an import route distribution

In this task, you create an import route distribution in DRG-Transit with three statements, each importing routes from the VCN attachments used by VCN-A, VCN-B, and VCN-C.

  1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  2. Click the DRG you're interested in, DRG-Transit.
  3. Under Resources, click Import Route Distributions.
  4. Click Create Import Route Distribution.
  5. In the screen that appears, give the import route distribution an easily recognized name like Import_Spoke_Routes, then click + Another Statement twice. For each of the three statements, add the following details:
    • Match Type: Choose Attachment.
    • Attachment Type Filter: Choose Virtual Cloud Network.
    • DRG Attachment: Choose a VCN attachment created previously for VCN-A, VCN-B, or VCN-C.
  6. Click Create Import Route Distribution when finished.

Task 7: Create a DRG route table for ingress from firewall

Create a route table named "From-Firewall" in DRG-Transit and set its import route distribution to the distribution created previously.

  1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  2. Click the DRG you're interested in, DRG-Transit.
  3. Under Resources, click DRG Route Tables.
  4. Click Create DRG Route Table.
  5. Assign the DRG route table a name, for example FROM-FIREWALL.
  6. Click Show Advanced Options.
  7. Click Enable Import Route Distribution.
  8. Choose Import_Spoke_Routes, the import route distribution you created in Task 6.
  9. Click Create Route Table.

    The route table is created and then displayed on the Route Tables page in the compartment you chose.

Task 8: Update VCN-Fire's attachment

Update the DRG route table of VCN-Fire's attachment to use the "From-Firewall" DRG route table.

  1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  2. Click the DRG you're interested in, DRG-Transit.
  3. Under Resources, click Virtual Cloud Network Attachments.
  4. Click the name of the DRG attachment used by VCN-Fire.
  5. Click Edit.
  6. Click Show Advanced Options.
  7. In the DRG route table tab, select FROM-FIREWALL from the list of available route tables.
  8. Click Save Changes.

Task 9: Configure routing inside the firewall VCN route tables

Configure ingress routing in VCN-Fire to send all inbound traffic to the firewall instance.

  1. Open the navigation menu, click Networking, and then click Virtual Cloud Networks.

  2. Click the VCN you're interested in, VCN-Fire.
  3. Under Resources, click Route Tables.
  4. Click Create Route Table.
  5. Name the VCN route table VCN-INGRESS, and enter the following route rules:

    • Target Type: Choose Private IP.
    • Destination Type: Choose CIDR Block.
    • Destination CIDR Block: Enter the CIDR block for VCN-A.
    • Target Selection: Enter 10.0.0.10, the private IPv4 address of the firewall instance.
  6. Click +Another Route Rule and repeat until you have a rule for each of the three spoke VCNs (VCN-A, VCN-B, and VCN-C).
  7. Click Create.

    The VCN route table is created and then displayed on the Route Tables page for the VCN.

  8. Open the navigation menu. Under Core Infrastructure, go to Networking and click Dynamic Routing Gateway.

  9. Click the DRG you are interested in, DRG-Transit.
  10. Under Resources, click Virtual Cloud Network Attachments.
  11. Click the name of the DRG attachment used by VCN-Fire.
  12. Click Edit.
  13. Click Show Advanced Options.
  14. In the VCN route table tab, click Select Existing and select VCN-INGRESS from the list of available route tables.
  15. Click Save Changes.
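The VCN-INGRESS table built in this task amounts to a map from each spoke CIDR to the firewall's private IP. A minimal Python sketch, where the spoke CIDRs are assumptions and 10.0.0.10 is the firewall address from the task above:

```python
import ipaddress

FIREWALL_IP = "10.0.0.10"   # private IPv4 of the firewall instance in Subnet-H
SPOKE_CIDRS = ["10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16"]  # hypothetical

# VCN-INGRESS: one Private IP rule per spoke CIDR, all targeting the firewall.
vcn_ingress = {cidr: FIREWALL_IP for cidr in SPOKE_CIDRS}

def ingress_target(dest_ip):
    """Return the next hop for traffic leaving the DRG and entering VCN-Fire."""
    for cidr, target in vcn_ingress.items():
        if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(cidr):
            return target
    return None  # no rule: the implicit VCN routing handles local subnets

# Traffic for any spoke is handed to the firewall instance for inspection.
assert ingress_target("10.2.7.7") == FIREWALL_IP
```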

Task 10: Configure VCN egress routing

Configure VCN egress routing in VCN-Fire's subnet named Subnet-H to send all traffic destined to addresses in the VCN CIDRs of VCN-A, VCN-B, and VCN-C to the DRG attachment.

  1. Open the navigation menu, click Networking, and then click Virtual Cloud Networks.

  2. Click the VCN you're interested in, VCN-Fire.
  3. Under Resources, click Route Tables.
  4. Click Create Route Table.
  5. Name the new VCN route table VCN-Egress. Create three route rules: click +Another Route Rule twice, then enter the following information for VCN-A, VCN-B, and VCN-C respectively:

    • Target Type: Choose Dynamic Routing Gateway.
    • Destination CIDR Block: Enter the CIDR block for one of the three spoke VCNs.
  6. When you've created rules for all spoke VCNs, click Create.
  7. Under Resources, click Subnets.
  8. Click Subnet-H, the name of the subnet with the firewall instance.
  9. Click Edit.
  10. Change the route table selected for the subnet to the VCN route table you created, VCN-Egress.
  11. Click Save Changes.

This completes configuration of transit routing. At this point, any packets sent from one spoke VCN to another are sent to the mutually attached DRG, redirected to a firewall in a hub VCN, and packets the firewall allows are then sent back to the DRG to be routed to their destination VCN.

Source: https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/scenario_g.htm

Dynamic Routing Gateways (DRGs)

A DRG acts as a virtual router, providing a path for traffic between your on-premises networks and VCNs, and can also be used to route traffic between VCNs. Using different types of attachments, custom network topologies can be constructed using components in different regions and tenancies. Each DRG attachment has an associated route table which is used to route packets entering the DRG to their next hop. In addition to static routes, routes from the attached networks are dynamically imported into DRG route tables using optional import route distributions.

Working with DRGs and DRG attachments

When creating a DRG, you must specify the compartment where you want the DRG to reside. Placing the DRG in a compartment helps you control access to it. If you're not sure which compartment to use, put the DRG in the same compartment as a VCN you use regularly. For more information, see Access Control.

You might optionally assign a friendly name to the DRG. It doesn't have to be unique, and you can change it later. Oracle automatically assigns the DRG a unique identifier called an Oracle Cloud ID (OCID). For more information, see Resource Identifiers.

To use a DRG, it must be attached to other network resources. In the API, the process of attaching creates a DrgAttachment object with its own OCID. The attachment has a type field which denotes the type of object being attached to the DRG. The type field can be set to one of the following values:

  • VCN
  • VIRTUAL_CIRCUIT
  • IPSEC_TUNNEL
  • REMOTE_PEERING_CONNECTION

To attach a VCN to a DRG, use the CreateDrgAttachment API operation or the console to explicitly create the DRG attachment object. Attachments for virtual circuits, IPSec tunnels, and remote peering connections are created (and deleted) automatically on your behalf when you create (or delete) the network object.

Working with DRG Route Tables and Route Distributions

A packet entering a DRG is routed using rules in the DRG route table assigned to that attachment. You can assign the same route table to multiple DRG attachments or create a dedicated route table for each attachment depending on the routing policies you want.

When you create a DRG, two default route tables are created for you: one for VCN attachments and one for all other attachments. When a route table is set as the default route table for an attachment type, the table is assigned to newly created attachments of that type unless an alternate table is explicitly specified. Route tables specified as the default for any type cannot be deleted. Ensure that a route table is not currently set as a default route table for an attachment type before trying to delete it.

A VCN attachment has two route tables: One DRG routing table for traffic entering the DRG, and one VCN routing table for traffic entering the VCN. The DRG route table exists in the DRG and is used to route packets entering the DRG through the attachment. The VCN route table is used to route packets entering the VCN through the attachment. If a VCN routing table is not defined, a hidden implicit table always provides connectivity to all subnets in the VCN.
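The direction-dependent table selection described above can be sketched as follows; the table names are illustrative and the attachment is modeled as a plain dict:

```python
def table_for(attachment, direction):
    """Pick the route table by traffic direction, per the rules above."""
    if direction == "into_drg":
        return attachment["drg_route_table"]
    # Traffic entering the VCN: fall back to the implicit table that
    # reaches every subnet when no VCN route table is assigned.
    return attachment.get("vcn_route_table", "implicit-all-subnets")

att = {"drg_route_table": "DRG-RT-1"}
assert table_for(att, "into_drg") == "DRG-RT-1"
assert table_for(att, "into_vcn") == "implicit-all-subnets"

att["vcn_route_table"] = "VCN-RT-1"   # e.g. assigned for transit routing
assert table_for(att, "into_vcn") == "VCN-RT-1"
```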

Dynamic Route Import Distributions

A distribution is a list of declarative statements, each containing match criteria (such as an OCID or an attachment type) and an action. You can use route distributions to specify how routes are imported from or exported to a DRG attachment.

DRG route tables contain both static and dynamic routes. Static routes are inserted into tables using the API, while dynamic routes are imported from attachments using an import route distribution. When a statement's criteria match an attachment, the routes associated with the network object attached to the DRG are dynamically imported into the DRG route tables that use that distribution. If the statement is removed from the distribution, the routes are withdrawn from those DRG route tables. Statements in a route distribution are evaluated in priority order: the lowest number has the highest priority. The order in which statements are evaluated doesn't affect the preference set for the routes they import.

When building route distribution statements in the console, you can create a statement whose match type is "Match All". In the API, encode a "match all" statement by setting the match criteria to the empty list.
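A sketch of how such statements could be evaluated, with priority ordering and an empty criteria list standing in for "Match All" as in the API (the statement fields and placeholder OCIDs are illustrative, not the SDK's types):

```python
# A distribution is an ordered list of statements; lowest priority number wins.
statements = [
    {"priority": 10, "criteria": [("attachment_type", "VCN")], "action": "ACCEPT"},
    {"priority": 20, "criteria": [], "action": "ACCEPT"},  # empty list = match all
]

def first_match(attachment):
    """Return the first statement (by priority) whose criteria all match."""
    for stmt in sorted(statements, key=lambda s: s["priority"]):
        if all(attachment.get(k) == v for k, v in stmt["criteria"]):
            return stmt
    return None

vcn_att = {"attachment_type": "VCN", "id": "ocid1.drgattachment.placeholder-a"}
ipsec_att = {"attachment_type": "IPSEC_TUNNEL", "id": "ocid1.drgattachment.placeholder-b"}

assert first_match(vcn_att)["priority"] == 10    # matched by the VCN filter
assert first_match(ipsec_att)["priority"] == 20  # falls through to match-all
```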

How do dynamic routes arrive at an attachment?

Dynamic routes from your on-premises network are advertised over BGP from the CPE to the DRG across IPSec tunnel and virtual circuit attachments. With RPC attachments, dynamic routes are exported to the peer DRG's RPC attachment. Dynamic routes from a VCN include all the subnet CIDRs plus all static route CIDRs configured on the VCN route table associated with the DRG attachment.

Dynamic Route Export Distributions

When an attachment is assigned to a DRG route table, the contents of that table can be dynamically exported to the attachment. If the default export route distribution is assigned to an attachment, the entire contents of the attachment's assigned DRG route table are dynamically exported to the attachment. If you want to disable dynamic route exports to an attachment, use the UpdateDrgAttachment API operation to set the attachment's export route distribution to NULL. Dynamic route export to VCN attachments is not supported.

Route propagation restrictions

Routes imported from an IPSec tunnel or virtual circuit are never exported to other IPSec tunnel or virtual circuit attachments, regardless of how the export route distribution is configured. In a similar vein, packets which enter a DRG through an IPSec tunnel or virtual circuit attachment can never leave through an IPSec tunnel or virtual circuit attachment. Packets are dropped if routing is configured such that packets originating from IPSec tunnel or virtual circuit attachments are sent to IPSec tunnel or virtual circuit attachments.
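This restriction amounts to a simple predicate on the ingress and egress attachment types; a minimal sketch (attachment-type names follow the list earlier in this page):

```python
EDGE_TYPES = {"IPSEC_TUNNEL", "VIRTUAL_CIRCUIT"}

def may_forward(ingress_type, egress_type):
    """Packets arriving via an IPSec tunnel or virtual circuit attachment
    may never leave via another IPSec tunnel or virtual circuit attachment."""
    return not (ingress_type in EDGE_TYPES and egress_type in EDGE_TYPES)

assert may_forward("IPSEC_TUNNEL", "VCN")                  # on-prem -> VCN: allowed
assert may_forward("VCN", "VIRTUAL_CIRCUIT")               # VCN -> on-prem: allowed
assert not may_forward("IPSEC_TUNNEL", "VIRTUAL_CIRCUIT")  # dropped by the DRG
assert not may_forward("VIRTUAL_CIRCUIT", "VIRTUAL_CIRCUIT")
```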

ECMP

Equal-cost multi-path routing (ECMP) is a feature which allows flow-based load balancing of network traffic over multiple FastConnect virtual circuits or multiple IPSec tunnels (but not a mix of circuit types) using BGP. ECMP allows active-active load balancing and failover of network traffic between a maximum of eight circuits.

Oracle utilizes the protocol, destination IP, source IP, destination port, and source port to distinguish flows for load balancing purposes using a consistent and deterministic algorithm. Therefore, multiple flows are necessary to utilize all available bandwidth.
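The flow-to-path behavior can be illustrated with a toy hash over the 5-tuple. SHA-256 here is only a stand-in for Oracle's internal algorithm, which is consistent and deterministic but not published; the addresses and ports are made up:

```python
import hashlib

def pick_path(flow, n_paths):
    """Deterministically map a 5-tuple flow to one of n equal-cost paths."""
    key = "|".join(str(f) for f in flow).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

flow_a = ("tcp", "10.1.0.5", "192.168.1.9", 443, 51515)
flow_b = ("tcp", "10.1.0.5", "192.168.1.9", 443, 51516)  # different source port

# The same flow always hashes to the same path (no per-packet reordering)...
assert pick_path(flow_a, 8) == pick_path(flow_a, 8)
assert pick_path(flow_b, 8) == pick_path(flow_b, 8)

# ...so spreading load across tunnels requires many distinct flows.
paths = {pick_path(("tcp", "10.1.0.5", "192.168.1.9", 443, p), 8)
         for p in range(50000, 50100)}
assert len(paths) > 1
```

This is why a single large transfer cannot use all available bandwidth across eight tunnels: one flow always lands on one path.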

ECMP is off by default and can be enabled on a per-route table basis. Oracle only considers routes with identical route preference as eligible for ECMP forwarding. See Route Conflicts for more.

Route Source

DRG routes originate as either static routes or as dynamic routes from VCN, IPSec tunnel, FastConnect virtual circuit, or RPC attachments. This origin defines their source, which is an immutable characteristic of the route. In the API, the source is referred to as the routeProvenance of a DrgRouteRule.

Routes are propagated between DRGs using RPC attachments.

Routes with a source of IPSEC_TUNNEL or VIRTUAL_CIRCUIT are not exported to IPSec tunnel or virtual circuit attachments, regardless of the attachment's export distribution.

Routing a Subnet's Traffic to a DRG

The basic routing scenario sends traffic from a subnet in the VCN to the DRG. For example, if you're sending traffic from the subnet to your on-premises network, you set up a rule in the subnet's route table. The rule's destination CIDR is the CIDR for the on-premises network (or a subnet within), and the rule's target is the DRG. For more information, see VCN Route Tables.
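Such a subnet route rule can be sketched with Python's ipaddress module; the on-premises CIDR is an assumption for illustration:

```python
import ipaddress

ON_PREM = ipaddress.ip_network("172.16.0.0/12")   # hypothetical on-premises CIDR

# Subnet route table: destination CIDR -> target, as described above.
subnet_routes = [(ON_PREM, "DRG")]

def route(dest_ip):
    for cidr, target in subnet_routes:
        if ipaddress.ip_address(dest_ip) in cidr:
            return target
    return "local"   # intra-VCN traffic needs no explicit rule

assert route("172.16.5.9") == "DRG"     # on-premises host: handed to the DRG
assert route("10.0.1.15") == "local"    # stays inside the VCN
```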

Required IAM Policy

Peering VCNs using a DRG requires specific IAM permissions. See IAM policies related to DRG peering for details on the permissions needed.

DRG versions

DRGs created before May 17, 2021 use the legacy software, and can be upgraded to the most recent version. DRGs created after that have the upgraded features by default.

The following summarizes the difference between an upgraded and legacy DRG:

A legacy DRG:

  • Has no programmable route tables. It has a default routing behavior where all traffic is forwarded from on-premises to an associated VCN and from the VCN to on-premises.
  • Can attach to a single VCN. The DRG can only be used for remote VCN peering using an RPC.
  • Can attach FastConnect or Site-to-Site VPN, or both. You can only reach resources in the local region using these connections.
  • Can support an RPC connection with a remote DRG-VCN pair in the same tenancy.

An upgraded DRG:

  • Has two route tables by default, and more can be added later.
  • Can have many VCNs attached to it within the same region. Local VCN to VCN traffic can pass through a mutually connected DRG instead of an LPG.
  • Can attach to on-premises using FastConnect or Site-to-Site VPN, or both. You can reach resources in both local and remote regions using these connections.
  • Supports an RPC connection with a DRG/VCN pair in the same or another tenancy.

The rest of this article has recently been updated to reflect the capabilities of an upgraded DRG, as have the common networking scenarios.

Before you upgrade a DRG

To take advantage of enhanced DRG features, upgrade your DRG. The DRG upgrade process is automated, but you must have the required permissions to initiate an upgrade.

Upgrading a DRG is a one-way process with no option to roll back to a legacy DRG after the upgrade process has been initiated.

Expect there to be a traffic outage for all the DRG's attachments during the upgrade process. Each attachment is updated one at a time, forcing each specific attachment into a provisioning state where it will no longer forward traffic. Any existing BGP sessions for your on-premises connections (FastConnect or Site-to-Site VPN) are reset, any Site-to-Site VPN IPSec tunnels are brought offline for a short time, and any remote peering connections are briefly unavailable. For example, if your DRG has two FastConnect virtual circuit attachments, one virtual circuit is upgraded first, causing it to drop connectivity. After that update has finished, the upgrade process upgrades the second virtual circuit attachment and the completed virtual circuit is brought back online.

Expect the upgrade process to last up to 30 minutes per DRG, with each attachment taking upwards of 5 minutes.

Note

Expect a traffic outage during the upgrade process for any components attached to the DRG. Oracle recommends upgrading your DRG during a maintenance window.

After the DRG upgrade process has completed, any Site-to-Site VPN IPSec tunnels are brought back online and all BGP sessions for FastConnect and Site-to-Site VPN are re-established. By default, the upgraded DRG has two autogenerated DRG route tables and import route distributions enabled for your attachments. These resources are designed for backward compatibility with your legacy DRG and allow for all previous communication to resume in the same manner as before the upgrade without any additional user intervention.

For step-by-step instructions on how to upgrade your DRG refer to Upgrading a DRG.

Note

If the DRG upgrade process gets stuck for any reason, create a service request ticket, and mark the ticket as high severity.

Scenarios

We have provided some detailed networking scenarios to help you understand the role of a DRG in the Networking service and how the components work together in general.

Source: https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingDRGs.htm


Basic Routing Scenarios For The Enhanced DRG

In this blog post we will explore some of the new features and functionality of the recently enhanced version of the Dynamic Routing Gateway (DRG).  This updated version of the DRG is now available in all commercial OCI regions, and any newly created DRGs have the updated features by default.  Any DRGs created before June 2021 may use the legacy version but can be upgraded to the new version.

The legacy DRG was a virtual router that you could use to route your Virtual Cloud Network (VCN) traffic to and from:

  • On premise networks via IPSec tunnels
  • On premise networks via FastConnect virtual circuits
  • Other VCNs in remote OCI regions via Remote Peering Connections (RPC)

The legacy DRG had the below limitations that have been addressed in the new DRG version:

  • VCN Attachment
    The legacy DRG could only be attached to a single VCN in the same region.  In order for customers to connect multiple VCNs in the same region, a Local Peering Gateway (LPG) would be required which was limited to 10 VCN peerings.
  • FastConnect and Site-to-Site Virtual Private Network (VPN)
    Connectivity from on premise networks via FastConnect or IPSec tunnels on the legacy DRG was limited to reaching resources in the local region where the DRG is provisioned.  Customers wishing to connect from on premise networks to resources in another region from where the DRG is provisioned were forced to setup a separate FastConnect or IPSec tunnel to that other region.
  • Legacy DRG Route Table
    Legacy DRG includes a route table that by default routes all traffic between on premise networks and the VCN it's attached to with no ability to modify that behavior.

    
New enhanced DRG version features:

  • VCN Attachment
    The new enhanced DRG now supports multiple VCN attachments within the same region.  VCN to VCN traffic inside the same region can now pass through a mutually connected DRG instead of an LPG.  This feature allows customers to connect up to 300 local VCNs together, as opposed to the 10 VCN limit on the LPG.
  • FastConnect and Site-to-Site VPN
    Customers can now reach resources in a remote region from on premise networks via FastConnect or IPSec tunnels to one region.  Traffic would flow from on premise to one region over FastConnect or IPSec tunnel to the new DRG, and then routed over an RPC connection to another region over the OCI backbone.
  • Enhanced DRG Route Table
    You can now have a configurable route table for each network attachment which gives customers more granular control of routing and the ability to meet more complex routing requirements.

    
DRG Attachments, Route Tables, and Route Distributions

The new enhanced DRG can have multiple network attachments:

  • VCN Attachments
  • RPC Attachments
  • IPSec Tunnel Attachments
  • Virtual Circuit (VC) Attachments

One way to think of a DRG attachment is to think of them as interfaces that you plug into a traditional physical router.  You are basically attaching the DRG virtual router to different networks you want the DRG to connect to.

Each DRG attachment will have a route table that is applied to inbound traffic.  When a packet enters a DRG, it is routed using rules in the DRG attachment route table assigned to that attachment to make its forwarding decision.  When you create a DRG, two route tables are autogenerated for you by default.  One for VCN attachments which will apply to traffic entering the DRG from the VCN, and one for all other attachments (RPC, IPSec, VC) that will apply to traffic entering the DRG from the RPC, IPsec or VC.  These are autogenerated for you and applied to the attachments by default, but you can manually create additional route tables and assign them to attachments as well.

The DRG route tables support both static and dynamic routes.  Static routes can be added manually via the OCI console, while dynamic routes are imported and exported to and from the attachments.  The new enhanced DRG gives customers more granular control of the routes being imported  or exported using import or export route distributions.  An import route distribution is autogenerated for each of the two autogenerated route tables (one for VCN attachment and one for IPSec, VC, and RPC attachments).  By default all routes are imported into the VCN attachment route table, and all VCN routes are imported into the attachment route table for all other attachments.  There is also a default export route distribution that exports all routes that is applied to all attachment route tables.  This means by default you will not need to make any changes to the routing tables or route distributions for simple routing behavior between VCN attachments and IPSec, VC, and RPC attachments.  
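The default import behavior described above can be sketched as follows; the attachment names, types, and CIDRs are illustrative only:

```python
attachments = [
    {"name": "vcn-att",   "type": "VCN",          "routes": ["10.0.0.0/24"]},
    {"name": "ipsec-att", "type": "IPSEC_TUNNEL", "routes": ["192.168.0.0/16"]},
]

def default_imports(table_kind):
    """Model the two autogenerated import route distributions."""
    if table_kind == "for_vcn_attachments":
        wanted = lambda a: True                 # import routes from everything
    else:                                       # IPSec, VC, and RPC attachments
        wanted = lambda a: a["type"] == "VCN"   # import only VCN routes
    return sorted(r for a in attachments if wanted(a) for r in a["routes"])

# The VCN attachments' table learns every route, including on-premises ones...
assert default_imports("for_vcn_attachments") == ["10.0.0.0/24", "192.168.0.0/16"]
# ...while the other attachments' table learns only the VCN routes.
assert default_imports("for_other_attachments") == ["10.0.0.0/24"]
```

This is why simple VCN-to-on-premises connectivity works with no route-table changes at all.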

Scenario 1

In this scenario we are going to establish basic connectivity from on premise networks (192.168.0.0/16) to a private subnet in a VCN (10.0.0.0/24).  The good news is the autogenerated route tables and import/export route distributions discussed above are generated to allow for this connectivity with minimal changes.

You must create the DRG before you create an IPSec VPN or FastConnect; when you create the IPSec VPN or FastConnect, it is automatically attached to the DRG for you, and the autogenerated route table and import and export route distributions are applied to that attachment as well.  The DRG is not automatically attached to the VCN, however, so you will need to do that manually by going to the DRG, clicking Create Virtual Cloud Network Attachment under the Virtual Cloud Network Attachments resource, and selecting your VCN.  Don't forget to add route rules in your subnet routing table to point the on premise networks to the DRG route target, and modify any Security Lists or Network Security Groups to permit the traffic.

This scenario builds on what we already set up in Scenario 1, but instead of communicating between on-premises networks and the private subnet in the VCN, we will use the VCN as a transit network to route between on-premises networks and resources in the Oracle Services Network (OSN).

First let's dig a little deeper into the VCN attachment to understand the routing. The VCN attachment, unlike the other attachment types, actually has two routing tables. One is the DRG routing table for traffic entering the DRG from the VCN attachment. By default this is the autogenerated routing table discussed above, which automatically contains the dynamic routes for the VCN subnets and the on-premises networks it learns from the IPSec or FastConnect attachment. The second is the VCN routing table for traffic leaving the DRG and entering the VCN through the attachment. This routing table was not required in Scenario 1 because a hidden implicit routing table is used by default, which always permits connectivity to all the subnets in the VCN. However, to use the VCN as a transit network as in Scenario 2, we must create this VCN route table and apply it to the DRG VCN attachment, as shown below, to route the OSN network. The steps below assume you already have a Service Gateway for the VCN, with a route table associated to the Service Gateway that routes your on-premises networks to the DRG.

Create a route table for the DRG VCN Attachment

  1. Click on the VCN under Networking >> Virtual Cloud Networks
  2. On the left hand side under the Resources menu, click Route Tables
  3. Click the blue button "Create Route Table"
  4. Give it a name, for example DRG_Route_Table
  5. Under Route Rules click the "+ Another Route Rule" button
  6. For Target Type, select Service Gateway
  7. For Destination CIDR Block, select the OSN services you need to reach from on-premises, for example Object Storage or All Services within that region
  8. For Target Service Gateway, select the Service Gateway you created
  9. Click the blue Create button

Associate route table to the DRG VCN Attachment

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on the VCN Attachment Name
  3. Click the gray Edit button at the top
  4. Select Advanced Options
  5. Select VCN route table tab
  6. Click the Select Existing radio button
  7. Select the route table you created in the section above from the drop-down box. In our example this is DRG_Route_Table
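
The two route tables now associated with the VCN attachment can be summarized with a toy lookup (the table names and the fallback string are illustrative, not OCI behavior verbatim):

```python
# Toy model of the VCN attachment's two routing tables in Scenario 2.
drg_route_table = {      # traffic entering the DRG *from* the VCN
    "192.168.0.0/16": "IPSec/FastConnect attachment",  # learned dynamically
}
vcn_route_table = {      # traffic leaving the DRG *into* the VCN
    "OSN services": "Service Gateway",  # the DRG_Route_Table rule added above
}

def lookup(direction, destination):
    table = drg_route_table if direction == "from_vcn" else vcn_route_table
    return table.get(destination, "implicit VCN routing")

print(lookup("into_vcn", "OSN services"))  # transit traffic reaches the OSN
```

Without the DRG_Route_Table association, only the implicit VCN routing applies and the DRG could deliver traffic to VCN subnets but not onward to the Service Gateway.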

In Scenario 1 above we leveraged a VPN or FastConnect in Ashburn to connect to a VCN in the same Ashburn region. If you recall from the Introduction section above, one limitation of the legacy DRG was that it only allowed customers to connect from on-premises to the single region where the VPN or FastConnect terminated. If a customer wanted to reach a remote region, this required provisioning another VPN or FastConnect from on-premises to the remote region. In Scenario 3 we will establish connectivity from on-premises (192.168.0.0/16) to a private subnet in a VCN in the Phoenix region (172.16.0.0/24), using the same VPN or FastConnect from Scenario 1 in Ashburn together with the new enhanced DRG functionality.

In order to connect between two OCI regions, we will utilize a Remote Peering Connection (RPC) on the DRG.  It is assumed that the Phoenix DRG is already provisioned and attached to the Phoenix VCN.

On the Ashburn DRG:

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on Remote Peering Connection Attachments on the left side under Resources
  3. Click the blue button labeled "Create Remote Peering Connection" and give it a name, for example "Phoenix RPC"

On the Phoenix DRG:

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on Remote Peering Connection Attachments on the left side under Resources
  3. Click the blue button labeled "Create Remote Peering Connection" and give it a name, for example "Ashburn RPC"

Once you have the Remote Peering Connections created on both DRGs, the next step is to peer the two RPCs, as each one needs to know about the other before peering can happen. The process is simple: you only need to tell one RPC the other RPC's OCID, and they will establish a peering relationship. You can do this from either RPC; we will outline the steps to initiate the peering from Phoenix.
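
The peering exchange can be sketched as a tiny state machine (a toy model, not the OCI API; the OCID string is a made-up placeholder):

```python
class Rpc:
    """Toy stand-in for a Remote Peering Connection."""
    def __init__(self, name):
        self.name, self.peer, self.state = name, None, "NEW"

def establish_connection(initiator, remote_ocid, registry):
    # The initiator (here, the Phoenix-side RPC) supplies the remote RPC's
    # OCID; both ends then move to a peered state.
    remote = registry[remote_ocid]
    initiator.peer, remote.peer = remote, initiator
    initiator.state = remote.state = "PEERED"

ashburn_rpc = Rpc("Phoenix RPC")   # created on the Ashburn DRG
phoenix_rpc = Rpc("Ashburn RPC")   # created on the Phoenix DRG
registry = {"ocid1.rpc.example": ashburn_rpc}  # placeholder OCID, not a real one

establish_connection(phoenix_rpc, "ocid1.rpc.example", registry)
print(phoenix_rpc.state, ashburn_rpc.state)
```

Only one side needs to initiate; once the OCID exchange succeeds, both RPCs report the peered state, which mirrors the console steps below.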

On the Ashburn DRG:

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on Remote Peering Connection Attachments on the left side under Resources
  3. Click on the Remote Peering Connection you named earlier, in our example it's "Phoenix RPC"
  4. Click Copy under OCID: at the top right. This copies the Ashburn RPC OCID to your clipboard; we will need it to establish the peering in Phoenix.

On the Phoenix DRG:

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on Remote Peering Connection Attachments on the left side under Resources
  3. Click on the Remote Peering Connection you named earlier, in our example it's "Ashburn RPC"
  4. Click on the blue button labeled "Establish Connection"
  5. Select the region, in our example it's Ashburn which is us-ashburn-1
  6. Paste the Ashburn RPC OCID from your clipboard into the Remote Peering Connection OCID field
  7. Click on the blue button labeled "Establish Connection"

If you recall from the Introduction section, the new enhanced DRG autogenerates two DRG routing tables: one for VCN attachments and one for all other attachments (RPC, IPSec, FC). In Scenario 1 and Scenario 2 those autogenerated routing tables worked without any changes, but Scenario 3 requires some changes. The issue is that the autogenerated DRG route table for RPC, IPSec, and FC attachments is configured by default with an import route distribution that imports routes from VCN attachments only. That sufficed in Scenario 1 and Scenario 2, but here it means the on-premises route 192.168.0.0/16 will not propagate over the RPC connection to Phoenix, and likewise the Phoenix VCN route 172.16.0.0/24 will not propagate to on-premises. To fix this, we will create two separate route tables in Ashburn, one for the IPSec/VC attachment and one for the RPC attachment, and be specific about which types of routes each imports.
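
A small simulation makes the problem, and the fix, concrete (a sketch only; the sets stand in for the route distribution statements created in the steps that follow):

```python
# Toy model of import route distributions: a distribution is the set of
# attachment types whose routes a DRG route table accepts.
def imported(distribution, advertised):
    return [cidr for att, cidr in advertised if att in distribution]

advertised = [
    ("VCN", "10.0.0.0/24"),                          # Ashburn VCN
    ("IPSEC_TUNNEL", "192.168.0.0/16"),              # on-premises
    ("REMOTE_PEERING_CONNECTION", "172.16.0.0/24"),  # Phoenix VCN via RPC
]

default_dist = {"VCN"}  # autogenerated table for IPSec/VC/RPC attachments
import_onprem = {"VCN", "REMOTE_PEERING_CONNECTION"}  # for the IPSec/VC attachment
import_rpc = {"IPSEC_TUNNEL", "VIRTUAL_CIRCUIT"}      # for the RPC attachment

print(imported(default_dist, advertised))   # Phoenix route is missing
print(imported(import_onprem, advertised))  # now includes 172.16.0.0/24
print(imported(import_rpc, advertised))     # on-prem route flows toward Phoenix
```

With the default distribution the on-premises attachment never learns the Phoenix route, and the RPC never learns the on-premises route; the two custom distributions restore exactly those imports.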

On the Ashburn DRG - Create Import Route Distribution for On Prem

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on Import Route Distributions on the left side under Resources
  3. Click on the blue button labeled Create Import Route Distribution
  4. Name it Import_Onprem
  5. Under Route Distribution Statements, create two statements with different priority numbers: one with Attachment Type RPC and the other with Attachment Type VCN
  6. Click the blue button labeled Create Import Route Distribution

On the Ashburn DRG - Create Import Route Distribution for RPC

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on Import Route Distributions on the left side under Resources
  3. Click on the blue button labeled Create Import Route Distribution
  4. Name it Import_RPC
  5. Under Route Distribution Statements, create a statement with Attachment Type IPSec Tunnel or Virtual Circuit, depending on your on-premises connection
  6. Click the blue button labeled Create Import Route Distribution

On the Ashburn DRG - Create Route Table for On Prem

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on DRG Route Tables on the left side under Resources
  3. Click on the blue Create DRG Route Table button
  4. We will name this Onprem_RT
  5. Leave the static routes empty
  6. Click Show Advanced Options
  7. Select the Enable Import Route Distribution button and select the Import_Onprem Route Distribution created above
  8. Click the blue button labeled Create DRG Route Table 

On the Ashburn DRG - Create Route Table for RPC

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on DRG Route Tables on the left side under Resources
  3. Click on the blue Create DRG Route Table button
  4. We will name this RPC_RT
  5. Leave the static routes empty
  6. Click Show Advanced Options
  7. Select the Enable Import Route Distribution button and select the Import_RPC Route Distribution created above
  8. Click the blue button labeled Create DRG Route Table 

On the Ashburn DRG - Apply the new Route Tables to the Attachments

  1. Click on the DRG under Networking >> Customer Connectivity >> Dynamic Routing Gateways
  2. Click on the IPSec or Virtual Circuit Attachments on the left side under Resources
  3. Click on the Attachment Name
  4. Click Edit and select the DRG Route Table Onprem_RT
  5. Click the blue button labeled Save Changes
  6. Click on the RPC Attachments on the left side under Resources
  7. Click on the Attachment Name
  8. Click Edit and select the DRG Route Table RPC_RT
  9. Click the blue button labeled Save Changes

You should now be able to reach the 172.16.0.0/24 network from on prem 192.168.0.0/16.

Enhanced DRG Release Notes

OCI DRG Documentation

On-premises access to multiple DRGs and VCNs in different regions through a single connection

Enhanced DRG Product Announcement

Source: https://www.ateam-oracle.com/post/basic-routing-scenarios-for-the-enhanced-drg

Expanded DRG functionality

DRG functionality has been expanded to include the following capabilities:

  • You can attach a DRG to more than one VCN to provide inter-VCN network connectivity. VCNs can be in the same or different tenancies. 
  • You can now assign a different route table and routing policy to each network resource attached to your DRG, enabling granular routing control. For instance, by connecting all your VCNs and on-premises networks to a single DRG used as a "hub," you have a single central gateway for configuring traffic routing and Layer 3 isolation. One possible use of routing policy is directing all traffic passing through the DRG to a network virtual appliance or firewall.
  • Your on-premises network connected to a DRG in one region can access networks connected to a DRG in a different region using a remote peering connection (RPC).
  • You can now enable equal cost multi-path (ECMP) routing towards your IPSec VPN and FastConnect connections to support active-active scenarios. ECMP is controlled on a per route table basis.
  • Remote peering connections can now connect DRGs in the same region or different tenancies.
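
Conceptually, ECMP picks a tunnel per flow by hashing the flow's 5-tuple, so one flow stays on one tunnel while different flows spread across the active-active pair (a simplified sketch, not OCI's actual hash):

```python
import hashlib

tunnels = ["ipsec-tunnel-1", "ipsec-tunnel-2"]  # hypothetical active-active pair

def pick_tunnel(src, dst, proto, sport, dport):
    # Hash the 5-tuple so every packet of a given flow picks the same tunnel.
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return tunnels[digest % len(tunnels)]

# The same flow deterministically maps to the same tunnel
print(pick_tunnel("10.0.0.5", "192.168.1.9", "tcp", 443, 51000))
```

Because the selection is per-flow rather than per-packet, packets within a connection are not reordered across tunnels, which is what makes the active-active setup safe for TCP traffic.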

This capability is available in the US West (San Jose) and Canada Southeast (Montreal) regions only. See Dynamic Routing Gateways (DRGs) for more information.

Source: https://docs.oracle.com/iaas/releasenotes/changes/298fc8e3-d539-47a9-bbee-fd27bb4b80b2/


FastConnect with Multiple DRGs and VCNs

In this scenario, you have a single FastConnect that connects your existing on-premises network to Oracle Cloud Infrastructure. That FastConnect has at least one physical connection, or cross-connect.

In Oracle Cloud Infrastructure, you have multiple VCNs, all in the same region. Each VCN has its own DRG. For each VCN, there's a private virtual circuit that runs on the FastConnect and terminates at your CPE on one end, and on the VCN's DRG on the other end. The private virtual circuit enables communication that uses private IP addresses between the VCN and the on-premises network. See the following diagram.

This image shows the layout of VCNs connected to your on-premises network, each with its own private virtual circuit and DRG.

For example, imagine that each department in your organization has its own subnet in your on-premises network and a corresponding departmental VCN in Oracle Cloud Infrastructure. You want to enable private communication between each department's subnet and VCN over the FastConnect.

Or perhaps all the departments need to communicate with all the VCNs; for example, the VCNs might instead be separate development, test, and production environments, with each department needing access to all three.

The FastConnect and virtual circuits give you the general private connection where none of the traffic traverses the internet. You can separately control which on-premises subnets and VCNs can communicate by configuring route rules in your on-premises network and VCN route tables. You can optionally configure VCN security rules and other firewalls that you maintain to allow only certain types of traffic (such as SSH) between your on-premises network and VCN.
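
As a sketch of the kind of filtering a security rule expresses, here is a stateless allow-SSH-from-on-premises check (the source CIDR is this page's example on-premises network; everything else is illustrative, not the VCN security-rule API):

```python
import ipaddress

# Hypothetical ingress rule: allow only SSH (TCP/22) from on-premises.
ALLOWED_SRC = ipaddress.ip_network("192.168.0.0/16")

def permit(src_ip, proto, dport):
    return (ipaddress.ip_address(src_ip) in ALLOWED_SRC
            and proto == "tcp" and dport == 22)

print(permit("192.168.4.7", "tcp", 22))   # on-premises SSH is allowed
print(permit("203.0.113.9", "tcp", 22))   # other sources are rejected
```

Route rules decide whether traffic can reach a destination at all; rules like this one then decide which of that traffic is actually admitted.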

Public Peering

You can also set up public peering on that same FastConnect by creating a public virtual circuit. In the following diagram, the public virtual circuit is shown separate from the private virtual circuits. It terminates at Oracle's edge. The public virtual circuit enables communication that uses public IP addresses but does not traverse the internet.

All public resources in a VCN can be reachable over public peering if there is internet access. See FastConnect Public Peering Advertised Routes for more detail. For other important details about how you can control route preferences when you have multiple connections between your on-premises network and Oracle, see Routing Details for Connections to Your On-Premises Network.

This image is similar to the earlier one, but also includes a public virtual circuit.

When you set up public peering for your FastConnect, the public IP prefixes that you designate for the public virtual circuit are advertised to all the VCNs in your tenancy. The routes advertised to your on-premises network are all the Oracle Cloud Infrastructure public IP addresses (including the CIDRs for each of the VCNs in the tenancy).

Important

Your network receives Oracle's public IP addresses through both FastConnect and your Internet Service Provider (ISP). When configuring your edge, give higher preference to FastConnect over your ISP, or you will not receive the benefits of FastConnect. If you plan to also set up private access to Oracle services through one of the VCNs, see the important routing details in Routing Details for Connections to Your On-Premises Network.
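
The edge-router preference above can be sketched as a simplified best-path selection over a single BGP-style attribute (the prefix and local-preference values are made up for illustration):

```python
# Both neighbors advertise the same (made-up) Oracle prefix; the path with
# the higher local preference wins, so FastConnect carries the traffic.
paths = [
    {"prefix": "203.0.113.0/24", "neighbor": "ISP", "local_pref": 100},
    {"prefix": "203.0.113.0/24", "neighbor": "FastConnect", "local_pref": 200},
]

def best_path(paths):
    # Simplified: real BGP best-path selection has many more tie-breakers.
    return max(paths, key=lambda p: p["local_pref"])["neighbor"]

print(best_path(paths))  # FastConnect
```

If the preference were left equal or tilted toward the ISP, traffic to Oracle's public addresses would flow over the internet and bypass the FastConnect entirely.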

For more information, see Basic Network Diagrams.

Source: https://docs.oracle.com/en-us/iaas/Content/Network/Concepts/fastconnectmultipledrgs.htm

