
Terraform - Modules

Posted on 2018-04-29 | In Terraform |

Modules in Terraform are self-contained packages of Terraform configurations that are managed as a group. Modules are used to create reusable components in Terraform as well as for basic code organization.

You can find published modules in the Terraform Module Registry.

Module basics

Any set of Terraform configuration files in a folder is a module. E.g.,

  1. Create folder layout

    ├─ module
    │   └─ s3
    │       ├─ main.tf
    │       ├─ outputs.tf
    │       ├─ variables.tf
    │       └─ user-data.sh
    └─ qa
        └─ service
            ├─ main.tf
            ├─ outputs.tf
            └─ variables.tf
  2. Create a module that provisions an S3 bucket, in ‘\module\s3\main.tf’

    # S3
    resource "aws_s3_bucket" "terraform_module" {
      bucket = "yongfeiuall-terraform-module"

      versioning {
        enabled = true
      }
    }
  3. Use the module from ‘\qa\service\main.tf’

    provider "aws" {
      region = "ap-northeast-1"
    }

    module "module-test" {
      source = "../../module/s3"
    }
  4. Run terraform init, terraform plan and terraform apply commands

    D:\terraform\ModuleStudy\qa\service>terraform init
    Initializing modules...
    - module.module-test
    Getting source "../../module/s3"

Module inputs

Modules can have input parameters, too. To define them, you use a
mechanism you’re already familiar with: input variables.

  1. Open ‘\module\s3\variables.tf’, and add

    variable "yongfeiuall_module_bucket" {
      description = "The name of the S3 bucket"
    }
  2. Open ‘\module\s3\main.tf’, and update

    # S3
    resource "aws_s3_bucket" "terraform_module" {
      bucket = "${var.yongfeiuall_module_bucket}"

      versioning {
        enabled = true
      }
    }
  3. Open ‘\qa\service\main.tf’, and update

    provider "aws" {
      region = "ap-northeast-1"
    }

    module "module-test" {
      source = "../../module/s3"

      yongfeiuall_module_bucket = "yongfeiuall_variable"
    }
  4. Run terraform init, terraform plan and terraform apply commands

Module outputs

Outputs are a way to tell Terraform what data is important.

  1. Open ‘\module\s3\outputs.tf’, and add

    output "s3_bucket_name" {
      value = "${aws_s3_bucket.terraform_module.bucket}"
    }
  2. Open ‘\qa\service\main.tf’, and update it: add a security group whose name uses the module’s S3 bucket name (var.server_port is assumed to be declared in ‘\qa\service\variables.tf’, as in the folder layout above).

    provider "aws" {
      region = "ap-northeast-1"
    }

    module "module-test" {
      source = "../../module/s3"

      yongfeiuall_module_bucket = "yongfeiuall_variable"
    }

    resource "aws_security_group" "http" {
      name = "${module.module-test.s3_bucket_name}"

      # HTTP access from anywhere
      ingress {
        from_port   = "${var.server_port}"
        to_port     = "${var.server_port}"
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      # Outbound internet access
      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }

      lifecycle {
        create_before_destroy = true
      }
    }
  3. Run terraform init, terraform plan and terraform apply commands

Module versioning

We recommend explicitly constraining the acceptable version numbers for each external module to avoid unexpected or unwanted changes.

Module Repo

Instead of using a local folder, you can use a Git repository to version your module.

  • Parameters
    The URLs for Git repositories support the following query parameters:
    # ref - The ref to checkout. This can be a branch, tag, commit, etc.

    module "consul" {
      source = "git::https://hashicorp.com/consul.git?ref=0.0.2"
    }

Terraform Registry

You can pin a module to a specific version if you are using a module registry such as the Terraform Registry.

module "consul" {
  source  = "hashicorp/consul/aws"
  version = "0.0.2"
}

The Terraform Registry is a public registry. For private use you must use the Private Registry available in the Enterprise version.

Terrafile

External Terraform modules can be managed from a single control file called a Terrafile.
Refer to Terrafile for detailed information.

Module gotchas

File paths

By default, Terraform interprets paths relative to the current working directory. That works if you use the file function in a configuration file that sits in the same directory where you run terraform apply, but it breaks when the function is called from inside a module that lives in a separate folder.

You can use path.module to build a path that is relative to the module folder instead.

data "template_file" "user_data" {
  template = "${file("${path.module}/user-data.sh")}"

  vars {
  }
}

Terraform - State

Posted on 2018-04-26 | In Terraform |

Terraform must store state about your managed infrastructure and configuration. This state is used by Terraform to map real world resources to your configuration, keep track of metadata, and to improve performance for large infrastructures.

State is stored in terraform.tfstate in a simple JSON format. When you run Terraform, it fetches the latest status of your services from AWS and compares it with your Terraform configurations to determine which changes need to be applied.

Shared Storage for State Files

By default, Terraform stores state locally. Because this file must exist, working with Terraform in a team is complicated: the state file is a frequent source of merge conflicts. Remote state helps alleviate these issues.
Since version 0.9, Terraform uses “backends” in place of the old “remote state” mechanism; click here for details.

  1. Create an S3 bucket

    provider "aws" {
      region = "ap-northeast-1"
    }

    # S3
    resource "aws_s3_bucket" "terraform_state" {
      bucket = "yongfeiuall-terraform-state"

      versioning {
        enabled = true
      }

      lifecycle {
        prevent_destroy = true
      }
    }
  2. Configure Terraform to store state in S3
    a. Add the following to the Terraform file

    terraform {
      backend "s3" {
        bucket  = "yongfeiuall-terraform-state"
        key     = "global/s3/terraform.tfstate"
        region  = "ap-northeast-1"
        encrypt = "true"
      }
    }

    b. Execute terraform init

    D:\terraform\Example\stateS3>terraform init

    Initializing the backend...
    Do you want to copy existing state to the new backend?
    Pre-existing state was found while migrating the previous "local" backend to the
    newly configured "s3" backend. No existing state was found in the newly
    configured "s3" backend. Do you want to copy this state to the new "s3"
    backend? Enter "yes" to copy and "no" to start with an empty state.

    Enter a value: yes


    Successfully configured the backend "s3"! Terraform will automatically
    use this backend unless the backend configuration changes.

    Initializing provider plugins...

    The following providers do not have any version constraints in configuration,
    so the latest version was installed.

    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.

    * provider.aws: version = "~> 1.15"

    Terraform has been successfully initialized!

    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.

    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.

With remote state enabled, Terraform will automatically pull the latest state from
this S3 bucket before running a command, and automatically push the latest state
to the S3 bucket after running a command.
E.g., add the output below to the Terraform file and run terraform apply again.

output "s3_bucket_arn" {
  value = "${aws_s3_bucket.terraform_state.arn}"
}

Locking state files

If the state file is stored remotely so that many people can access it, you risk multiple people attempting to change the same file at the exact same time. So we need a mechanism that “locks” the state while it is in use by another user. We can accomplish this by creating a DynamoDB table for Terraform to use.

  1. Create the DynamoDB table
    provider "aws" {
      region = "ap-northeast-1"
    }

    # create a dynamodb table for locking the state file
    resource "aws_dynamodb_table" "dynamodb-terraform-state-lock" {
      name           = "yongfeiualll-terraform-state-lock"
      hash_key       = "LockID"
      read_capacity  = 20
      write_capacity = 20

      attribute {
        name = "LockID"
        type = "S"
      }

      tags {
        Name = "DynamoDB Terraform State Lock Table"
      }
    }

Run terraform apply

  1. Modify the Terraform S3 backend resource

    terraform {
      backend "s3" {
        bucket         = "yongfeiuall-terraform-lock"
        dynamodb_table = "yongfeiualll-terraform-state-lock"
        key            = "global/s3/terraform.tfstate"
        region         = "ap-northeast-1"
        encrypt        = "true"
      }
    }

    provider "aws" {
      region = "ap-northeast-1"
    }

    # S3
    resource "aws_s3_bucket" "terraform_lock" {
      bucket = "yongfeiuall-terraform-lock"

      versioning {
        enabled = true
      }

      lifecycle {
        prevent_destroy = true
      }
    }
  2. Run terraform init, terraform plan and terraform apply commands

Isolating state files

We recommend using separate Terraform folders (and therefore separate state files) for each environment and, within each environment, for each “component.”
Here is the file layout for our typical Terraform project:

stage
  └ vpc
  └ services
      └ frontend-app
      └ backend-app
          └ variables.tf
          └ outputs.tf
          └ main.tf
  └ data-storage
      └ mysql
      └ redis
mgmt
  └ vpc
  └ services
      └ bastion-host
      └ jenkins
global
  └ iam
  └ route53

  • stage: an environment for non-production workloads
  • mgmt: an environment for DevOps tooling (e.g. bastion host, Jenkins).
  • global: a place to put resources that are used across all environments, such as user management (IAM in AWS) and DNS management (Route53 in AWS).

Within each environment, we have separate folders for each “component.”

Within each component, we have the actual Terraform templates, which we organize according to the following naming conventions:

  • variables.tf: input variables.
  • outputs.tf: output variables.
  • main.tf: the actual resources.

Terraform - Deploy Server - Multiple Servers

Posted on 2018-04-24 | In Terraform |

In practice you never run just a single server; you run multiple servers elastically so that enough servers are always available.

Auto Scaling Group

You can use an Auto Scaling Group (ASG) to run multiple web servers.

  1. Create the config file
    variable "server_port" {
      description = "The port the server will use for HTTP requests"
      default     = 80
    }

    data "aws_availability_zones" "all" {}

    provider "aws" {
      region = "ap-northeast-1"
    }

    resource "aws_autoscaling_group" "yonfeiuall_scaling_group" {
      launch_configuration = "${aws_launch_configuration.yongfeiuall_launch_config.id}"
      availability_zones   = ["${data.aws_availability_zones.all.names}"]
      min_size             = 2
      max_size             = 10

      tag {
        key                 = "Name"
        value               = "yongfeiuall-asg"
        propagate_at_launch = true
      }
    }

    resource "aws_launch_configuration" "yongfeiuall_launch_config" {
      image_id      = "ami-28ddc154"
      instance_type = "t2.micro"

      user_data = <<-EOF
                  #!/bin/bash
                  yum update -y
                  yum install -y httpd
                  service httpd start
                  echo '<html><h1> configurable web server from terraform </h1></html>' > /var/www/html/index.html
                  EOF

      lifecycle {
        create_before_destroy = true
      }

      security_groups = ["${aws_security_group.http.id}"]
    }

    resource "aws_security_group" "http" {
      name = "yonfeiuall_single_web"

      # HTTP access from anywhere
      ingress {
        from_port   = "${var.server_port}"
        to_port     = "${var.server_port}"
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      # Outbound internet access
      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }

      lifecycle {
        create_before_destroy = true
      }
    }
  • You can add a lifecycle block to any resource to configure how that resource should be created, updated, or destroyed.
  • A data source represents a piece of read-only information that is fetched from the provider every time you run Terraform. To use a data source, you reference it with the following syntax: "${data.TYPE.NAME.ATTRIBUTE}" (see the short sketch after this list).
  2. Run terraform plan and terraform apply
  3. Verify the result
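
As a minimal illustration of that reference syntax (a sketch, not part of the original configuration; the output name az_names is made up):

    # Read-only lookup, fetched from the provider on every run
    data "aws_availability_zones" "all" {}

    # Reference it with the ${data.TYPE.NAME.ATTRIBUTE} syntax
    output "az_names" {
      value = "${data.aws_availability_zones.all.names}"
    }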

Elastic Load Balancer

An ASG gives you multiple servers, but that also means multiple IP addresses; an ELB lets you expose a single DNS name to the outside world.

  1. Create the config file

    variable "server_port" {
      description = "The port the server will use for HTTP requests"
      default     = 80
    }

    output "elb_dns_name" {
      value = "${aws_elb.yongfeiuall_elb.dns_name}"
    }

    data "aws_availability_zones" "all" {}

    provider "aws" {
      region = "ap-northeast-1"
    }

    # elb
    resource "aws_elb" "yongfeiuall_elb" {
      name               = "yongfeiuall-asg-elb"
      availability_zones = ["${data.aws_availability_zones.all.names}"]
      security_groups    = ["${aws_security_group.http.id}"]

      listener {
        lb_port           = "${var.server_port}"
        lb_protocol       = "http"
        instance_port     = "${var.server_port}"
        instance_protocol = "http"
      }

      health_check {
        healthy_threshold   = 2
        unhealthy_threshold = 2
        timeout             = 3
        interval            = 30
        target              = "HTTP:${var.server_port}/"
      }
    }

    # asg
    resource "aws_autoscaling_group" "yonfeiuall_scaling_group" {
      launch_configuration = "${aws_launch_configuration.yongfeiuall_launch_config.id}"
      availability_zones   = ["${data.aws_availability_zones.all.names}"]
      load_balancers       = ["${aws_elb.yongfeiuall_elb.name}"]

      min_size = 2
      max_size = 10

      tag {
        key                 = "Name"
        value               = "yongfeiuall-asg"
        propagate_at_launch = true
      }
    }

    # launch configuration
    resource "aws_launch_configuration" "yongfeiuall_launch_config" {
      image_id      = "ami-28ddc154"
      instance_type = "t2.micro"

      user_data = <<-EOF
                  #!/bin/bash
                  yum update -y
                  yum install -y httpd
                  service httpd start
                  echo '<html><h1> configurable web server from terraform </h1></html>' > /var/www/html/index.html
                  EOF

      lifecycle {
        create_before_destroy = true
      }

      security_groups = ["${aws_security_group.http.id}"]
    }

    # security group
    resource "aws_security_group" "http" {
      name = "yonfeiuall_single_web"

      # HTTP access from anywhere
      ingress {
        from_port   = "${var.server_port}"
        to_port     = "${var.server_port}"
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      # Outbound internet access
      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }

      lifecycle {
        create_before_destroy = true
      }
    }
  2. Run terraform plan and terraform apply

  3. Verify the result

Clean Up

This is very simple:

D:\terraform\Example>terraform destroy
aws_security_group.http: Refreshing state... (ID: sg-3a619542)
data.aws_availability_zones.all: Refreshing state...
aws_elb.yongfeiuall_elb: Refreshing state... (ID: yongfeiuall-asg-elb)
aws_launch_configuration.yongfeiuall_launch_config: Refreshing state... (ID: terraform-20180424085400323000000001)
aws_autoscaling_group.yonfeiuall_scaling_group: Refreshing state... (ID: tf-asg-20180424085411477000000002)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - aws_autoscaling_group.yonfeiuall_scaling_group
  - aws_elb.yongfeiuall_elb
  - aws_launch_configuration.yongfeiuall_launch_config
  - aws_security_group.http

Plan: 0 to add, 0 to change, 4 to destroy.

Do you really want to destroy?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:

Terraform - Deploy Server - Single Server

Posted on 2018-04-23 | In Terraform |

Let's do some simple exercises and deploy a few different kinds of servers.

Pre-condition

For Terraform to work with AWS, you must add the AWS user's AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to the environment variables.

set AWS_ACCESS_KEY_ID=(your access key id)
set AWS_SECRET_ACCESS_KEY=(your secret access key)

Single Server

  1. Create the config file

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "yongfeiuall" {
      ami           = "ami-1853ac65"
      instance_type = "t2.micro"

      tags {
        Name = "simple single server"
      }
    }
  2. Run terraform plan and terraform apply

  3. Verify the result

Single Web Server

  1. Create the config file
    provider "aws" {
      region = "ap-northeast-1"
    }

    resource "aws_instance" "yongfeiuall" {
      ami           = "ami-28ddc154"
      instance_type = "t2.micro"

      tags {
        Name = "simple web server"
      }

      user_data = <<-EOF
                  #!/bin/bash
                  yum update -y
                  yum install -y httpd
                  service httpd start
                  echo '<html><h1> single web server from terraform </h1></html>' > /var/www/html/index.html
                  EOF

      vpc_security_group_ids = ["${aws_security_group.http.id}"]
    }

    resource "aws_security_group" "http" {
      name = "yonfeiuall_single_web"

      # HTTP access from anywhere
      ingress {
        from_port   = 80
        to_port     = 80
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      # Outbound internet access
      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }

Notes:

  • By default AWS does not allow any inbound or outbound traffic, so we need to create a Security Group (with both inbound and outbound rules).
  • To attach the SG to the EC2 instance we need the SG's ID; in Terraform, the "${TYPE.NAME.ATTRIBUTE}" syntax references an attribute of another resource.
  • The <<-EOF and EOF markers let you create multiline strings without inserting newline characters all over the place.
  2. Run terraform plan and terraform apply
  3. Verify the result:
    yongfeiuall@automation:~$ curl http://13.113.195.209
    <html><h1> single web server from terraform </h1></html>

Opening the address in a browser shows the same page.

Configurable Web Server

For easier management, Terraform lets you define input variables:

variable "NAME" {
  [CONFIG ...]
}

The body of the variable declaration can contain three parameters, all of them
optional:

  • description
    Use this parameter to document how a variable is used.
  • default
    If no value is passed in for the variable, it falls back to this default; there are also several other ways to provide a value.
  • type
    Must be one of “string”, “list”, or “map”.
    E.g.,
    variable "list_example" {
      description = "An example of a list in Terraform"
      type        = "list"
      default     = [1, 2, 3]
    }

Variables are read with the "${var.VARIABLE_NAME}" syntax, as in the short sketch below.
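
For instance, a minimal, hypothetical sketch (the variable name instance_type and the resource values are placeholders, not from this post):

    variable "instance_type" {
      default = "t2.micro"
    }

    resource "aws_instance" "example" {
      ami           = "ami-28ddc154"
      instance_type = "${var.instance_type}" # read the variable here
    }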

Terraform also lets you define output variables:

output "NAME" {
  value = VALUE
}

An output can return attributes you want to see after the instance is created, for example:

output "public_ip" {
  value = "${aws_instance.example.public_ip}"
}

  1. Create the config file

    variable "server_port" {
      description = "The port the server will use for HTTP requests"
      default     = 80
    }

    output "public_ip" {
      value = "${aws_instance.yongfeiuall.public_ip}"
    }

    provider "aws" {
      region = "ap-northeast-1"
    }

    resource "aws_instance" "yongfeiuall" {
      ami           = "ami-28ddc154"
      instance_type = "t2.micro"

      tags {
        Name = "configurable web server"
      }

      user_data = <<-EOF
                  #!/bin/bash
                  yum update -y
                  yum install -y httpd
                  service httpd start
                  echo '<html><h1> configurable web server from terraform </h1></html>' > /var/www/html/index.html
                  EOF

      vpc_security_group_ids = ["${aws_security_group.http.id}"]
    }

    resource "aws_security_group" "http" {
      name = "yonfeiuall_single_web"

      # HTTP access from anywhere
      ingress {
        from_port   = "${var.server_port}"
        to_port     = "${var.server_port}"
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }

      # Outbound internet access
      egress {
        from_port   = 0
        to_port     = 0
        protocol    = "-1"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }
  2. Run terraform plan and terraform apply

  3. Verify the result

Terraform - Basics

Posted on 2018-04-23 | In Terraform |

Here we build on the AWS material covered earlier.

Terraform is an automation and orchestration tool for IT infrastructure that implements Infrastructure as Code (IaC): you manage and maintain IT resources with code, and you can see the execution plan before anything is actually run. Because state is saved to a file, you can also review your resources offline.

See the official website for more details: Terraform

Install

Download the package here and unzip it; for convenience, add it to the PATH environment variable.

Open a command prompt and verify that it works:

C:\Users\yongfei.hu>terraform --version
Terraform v0.11.7


C:\Users\yongfei.hu>

HashiCorp Configuration Language (HCL)

Terraform configurations are written in HCL and saved in files with the .tf extension.

provider

In the main.tf file:

provider "aws" {
  region = "us-east-1"
}

  • Use AWS as the provider.
  • Deploy your infrastructure into the us-east-1 region.

resources

Each provider supports many different kinds of resources that it can manage.

resource "PROVIDER_TYPE" "NAME" {
  [CONFIG ...]
}

  • PROVIDER is the name of a provider
  • TYPE is the type of resources to create in that provider
  • NAME is an identifier you can use throughout the Terraform code to refer to this resource
  • CONFIG consists of one or more configuration parameters that are specific to that resource
    E.g.,
    resource "aws_instance" "example" {
      ami           = "ami-40d28157"
      instance_type = "t2.micro"
    }

Run

From the command line, change into the directory that contains main.tf and run the Terraform commands.

terraform plan

> terraform plan
Refreshing Terraform state in-memory prior to plan...
(...)
+ aws_instance.example
ami: "ami-40dddded"
(...)

The plan command lets you see what Terraform will do before actually making any changes.

  • resources with a plus sign (+) are going to be created
  • resources with a minus sign (–) are going to be deleted
  • resources with a tilde sign (~) are going to be modified.

terraform apply

This is the command that actually executes the plan and performs the corresponding operations on AWS.

terraform graph

Shows the dependency graph.

AWS - VPC

Posted on 2018-04-23 | In AWS |

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.

A VPC lets you provide compute resources inside an isolated virtual network and set up your own LAN. You can easily choose which services to expose externally, isolate databases or files with higher security requirements at the network level, and define the network and system architecture of a private cloud virtually, effectively simulating a per-tenant data center.
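
The post itself is conceptual, but since the rest of this blog drives AWS through Terraform, here is a minimal, hypothetical HCL sketch of the idea: one VPC with a public subnet for internet-facing services and a private subnet for an isolated database tier. The names and CIDR ranges are made up for illustration.

    resource "aws_vpc" "example" {
      cidr_block = "10.0.0.0/16"
    }

    # Public subnet: instances launched here can be exposed to the internet
    resource "aws_subnet" "public" {
      vpc_id                  = "${aws_vpc.example.id}"
      cidr_block              = "10.0.1.0/24"
      map_public_ip_on_launch = true
    }

    # Private subnet: keep databases or other sensitive workloads isolated
    resource "aws_subnet" "private" {
      vpc_id     = "${aws_vpc.example.id}"
      cidr_block = "10.0.2.0/24"
    }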

Perhaps the faraway place you want to escape to is not the life you want

Posted on 2018-04-22 | In 随想 |

Ever since the Chinese New Year I have felt inexplicably tired, spinning every day like a top, a little lost. With the cities' "talent war" on top of that, a thought keeps surfacing: for the kids, for the family, would it be better to stay or to leave? So I might as well let it out and talk it through.

You set your heart on making it in a first-tier city because big cities have more opportunities; perhaps you had no choice but to come here in the first place. Big cities are tolerant: whether you cry or laugh, succeed or fail, you can do what you want within the rules without worrying about other people's opinions, and live the life you choose.

But big cities are also heartless: sky-high housing prices, a numbing pace of life. The city does not care whether you live or die, whether you stay or go; it carries on as always. In particular, the bizarre household-registration (hukou) system sorts us into classes: welfare, pensions, healthcare, education... everything is tied to hukou. Without it, no matter how well you do, even with a house and a car as part of the so-called middle class, you still feel no security and no sense of belonging. And your chances of getting a hukou in a big city are laughable, even though points-based registration schemes are starting to appear.

First-tier cities cannot house your body; third- and fourth-tier cities cannot hold your soul. Going back to your hometown is basically no longer an option: even though your hukou is there, even though your family ties and memories are there, the judgmental looks, the supremacy of personal connections, and the ways of thinking you no longer fit into will leave you feeling powerless, deflated, even lonely.

The pace at which second-tier cities are now competing for people is hard to imagine. There is a joke that if you ask a police officer for directions outside the Xi'an railway station, he will first ask about your degree, and if you qualify, suggest you register your hukou on the spot. More than two hundred thousand people moved in within a few months. The people have arrived, but can the infrastructure keep up? The immediate result is rising housing prices: could you still afford to buy if you went? And then there is employment, education, healthcare, and so on.

However fast society changes and however tempting the policies, this is exactly the time to calm down and weigh what kind of life you really want, so that you do not end up simply trading big-city anxiety for non-first-tier anxiety. Fundamentally, no choice is perfect; any choice inevitably means giving something else up.

AWS - Route 53

Posted on 2018-04-22 | In AWS |

DNS stands for Domain Name System; its job is to resolve the "web addresses" we use every day into IP addresses.

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service.

The mapping between a domain name and an IP address is called a "record". To create a record you first choose a routing policy, which determines how Amazon Route 53 responds to queries:

  • Simple
    The ordinary case, no different from any other DNS service. Use it for a single resource that performs a given function for your domain.
  • Weighted
    Routes traffic to multiple resources in proportions that you specify. For example, with two IPs where the first has Weight=10 and the second Weight=5, the first IP receives 10/(10+5) = 2/3 of the traffic (see the Terraform sketch after this list).
  • Latency
    Sends requests to the location that responds fastest. If your application is hosted in multiple Amazon EC2 regions, you can improve performance by serving users from the region with the lowest latency.
  • Failover
    Routes traffic to a resource while it is healthy, and to a different resource when the first one becomes unhealthy.
  • Geolocation
    Routes traffic based on the user's location.
  • Multivalue answer
    Lets Route 53 respond to DNS queries with up to eight healthy records selected at random.
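
To make the weighted example concrete in the Terraform style used elsewhere on this blog, here is a minimal, hypothetical sketch of two weighted records; the zone, record name, and IP addresses are placeholders, not values from this post.

    resource "aws_route53_zone" "example" {
      name = "example.com"
    }

    # Weight 10 → receives 10/(10+5) = 2/3 of the traffic
    resource "aws_route53_record" "www_primary" {
      zone_id        = "${aws_route53_zone.example.zone_id}"
      name           = "www.example.com"
      type           = "A"
      ttl            = 300
      set_identifier = "primary"
      records        = ["203.0.113.10"]

      weighted_routing_policy {
        weight = 10
      }
    }

    # Weight 5 → receives 1/3 of the traffic
    resource "aws_route53_record" "www_secondary" {
      zone_id        = "${aws_route53_zone.example.zone_id}"
      name           = "www.example.com"
      type           = "A"
      ttl            = 300
      set_identifier = "secondary"
      records        = ["203.0.113.20"]

      weighted_routing_policy {
        weight = 5
      }
    }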

AWS - AMI

Posted on 2018-04-21 | In AWS |

An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). From an AMI you launch instances, which are copies of the AMI running as virtual servers in the cloud; you can launch multiple instances from a single AMI.

The AMI lifecycle:

There are two types of AMIs: EBS-backed and instance store-backed.

An AMI can be created from an instance or from an EBS snapshot; EBS-backed AMIs are recommended (a hedged Terraform sketch of the from-instance path follows).

You can also share, copy, and deregister AMIs; look into those when you need them.
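
Since the rest of this blog manages AWS through Terraform, here is a minimal, hedged sketch of the "create an AMI from an instance" path. The resource and AMI names are made up, and the source instance is assumed to be the EC2 instance from the earlier single-server post.

    # Create an AMI (EBS-backed, for an EBS-backed source instance) from an existing instance
    resource "aws_ami_from_instance" "web_image" {
      name               = "yongfeiuall-web-ami"
      source_instance_id = "${aws_instance.yongfeiuall.id}"
    }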

AWS - Elastic Load Balancing & Auto Scaling

Posted on 2018-04-21 | In AWS |

ELB

Elastic Load Balancing distributes an application's incoming traffic across multiple EC2 instances in multiple Availability Zones, which increases the application's fault tolerance. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy ones.

An internet-facing load balancer has a publicly resolvable DNS name, so requests from clients over the internet can be routed to the EC2 instances registered with the load balancer.

E.g., after creating an ELB, clients see only a single public DNS name.

Auto Scaling

You can use Auto Scaling to manage Amazon EC2 capacity automatically, maintain the right number of instances for your application, operate a healthy group of instances, and scale it according to your needs.

You create collections of EC2 instances called Auto Scaling groups. You can specify the minimum number of instances in each group, and Auto Scaling ensures the group never goes below that size; you can specify the maximum number of instances, and Auto Scaling ensures it never exceeds that size. If you specify a desired capacity, either when you create the group or at any time afterwards, Auto Scaling ensures the group always has that many instances. If you specify scaling policies, Auto Scaling can launch or terminate instances as demand on your application increases or decreases.

A simple example

After the configuration is done, 3 instances are created automatically and added to the configured ELB. The static page can be reached both through the EC2 public IPs and through the ELB DNS name.

I terminated two of the instances; because the desired capacity is set to 3, two new instances were launched automatically and added back to the ELB (a hedged Terraform sketch of such a group follows).
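
In the Terraform style used elsewhere on this blog, the self-healing behaviour described above can be sketched roughly as follows. This is a hypothetical fragment, not the exact configuration behind this example; it assumes the launch configuration, availability-zone data source, and ELB from the earlier "Multiple Servers" post already exist under those names.

    # ASG that keeps exactly 3 instances registered with the ELB
    resource "aws_autoscaling_group" "example" {
      launch_configuration = "${aws_launch_configuration.yongfeiuall_launch_config.id}"
      availability_zones   = ["${data.aws_availability_zones.all.names}"]
      load_balancers       = ["${aws_elb.yongfeiuall_elb.name}"]

      min_size         = 3
      max_size         = 3
      desired_capacity = 3 # terminated instances are replaced automatically
    }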
