path: root/inventory
Commit log: commit message (Author, Date, files changed, -removed/+added lines)
* fixed build problems with openshift-ansible-inventory.spec (Thomas Wiest, 2015-05-07, 1 file, -2/+2)
|
* Allow option in multi_ec2 to set cache location. (Kenny Woodson, 2015-05-07, 2 files, -2/+27)
|
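The entry above makes the multi_ec2 inventory cache location configurable. As a rough illustration only (the option name `cache_location`, the default path, and the config layout here are hypothetical, not taken from the actual multi_ec2 configuration), such an option might be resolved like this:

```python
import os

# Hypothetical config dict, e.g. loaded from a YAML config file for multi_ec2.
config = {"cache_location": "~/.ansible/tmp/multi_ec2_inventory.cache"}

def resolve_cache_path(config, default="~/.ansible/tmp/multi_ec2_inventory.cache"):
    """Return the cache file path, honoring an optional override in the config."""
    path = config.get("cache_location", default)
    return os.path.expanduser(path)

print(resolve_cache_path(config))
```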
* openshift_fact and misc fixes (Jason DeTiberus, 2015-05-06, 1 file, -2/+4)
    - Do not attempt to fetch file to same file location when playbooks are run locally on master
    - Fix for openshift_facts when run against a host in a VPC that does not assign internal/external hostnames or ips
    - Fix setting of labels and annotations on node instances and in openshift_facts
    - converted openshift_facts to use json for local_fact storage instead of an ini file, included code that should migrate existing ini users to json
    - added region/zone setting to byo inventory
    - Fix fact related bug where deployment_type was being set on node role instead of common role for node hosts
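One item above converts openshift_facts local fact storage from an ini file to json, with migration code for existing ini users. A minimal sketch of that kind of migration, assuming a simple sectioned ini layout and hypothetical file paths (the real openshift_facts locations and section handling may differ):

```python
import configparser
import json
import os

def migrate_ini_facts_to_json(ini_path, json_path):
    """Convert a legacy ini local-facts file to json, if no json file exists yet."""
    if not os.path.exists(ini_path) or os.path.exists(json_path):
        return
    parser = configparser.ConfigParser()
    parser.read(ini_path)
    # Flatten ini sections into a nested dict: {section: {key: value}}.
    facts = {section: dict(parser.items(section)) for section in parser.sections()}
    with open(json_path, "w") as f:
        json.dump(facts, f, indent=2)

# Paths are illustrative only.
migrate_ini_facts_to_json("/etc/ansible/facts.d/openshift.ini",
                          "/etc/ansible/facts.d/openshift.fact")
```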
* Add ansible_connection=local to localhost in inventory (Jason DeTiberus, 2015-04-24, 3 files, -3/+3)
|
* Adding refresh-cache option and cleanup for pylint. Also updated for aws/hosts/ being added. (Kenny Woodson, 2015-04-22, 2 files, -46/+53)
* Fix typos... master not mater (Jason DeTiberus, 2015-04-20, 1 file, -1/+1)
|
* Fix libvirt metadata used to store ansible tags (Lénaïc Huard, 2015-04-16, 1 file, -2/+2)
    According to https://libvirt.org/formatdomain.html#elementsMetadata, the
    `metadata` tag can contain only one top-level element per namespace.
    Because of that, libvirt stored only the `deployment-type-{{ deployment_type }}` tag.
    As a consequence, the dynamic inventory reported no `env-{{ cluster }}` group.
    This is problematic for the `terminate.yml` playbook, which iterates over
    `groups['tag-env-{{ cluster-id }}']`.

    The symptom is that `oo_hosts_to_terminate` was not defined. In the end, as
    Ansible couldn’t iterate on the value of `groups['oo_hosts_to_terminate']`,
    it iterated on its letters:

    ```
    TASK: [Destroy VMs] ***********************************************************
    failed: [localhost] => (item=['g', 'destroy']) => {"failed": true, "item": ["g", "destroy"]}
    msg: virtual machine g not found
    failed: [localhost] => (item=['g', 'undefine']) => {"failed": true, "item": ["g", "undefine"]}
    msg: virtual machine g not found
    failed: [localhost] => (item=['r', 'destroy']) => {"failed": true, "item": ["r", "destroy"]}
    msg: virtual machine r not found
    failed: [localhost] => (item=['r', 'undefine']) => {"failed": true, "item": ["r", "undefine"]}
    msg: virtual machine r not found
    failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
    msg: virtual machine o not found
    failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
    msg: virtual machine o not found
    failed: [localhost] => (item=['u', 'destroy']) => {"failed": true, "item": ["u", "destroy"]}
    msg: virtual machine u not found
    failed: [localhost] => (item=['u', 'undefine']) => {"failed": true, "item": ["u", "undefine"]}
    msg: virtual machine u not found
    failed: [localhost] => (item=['p', 'destroy']) => {"failed": true, "item": ["p", "destroy"]}
    msg: virtual machine p not found
    failed: [localhost] => (item=['p', 'undefine']) => {"failed": true, "item": ["p", "undefine"]}
    msg: virtual machine p not found
    failed: [localhost] => (item=['s', 'destroy']) => {"failed": true, "item": ["s", "destroy"]}
    msg: virtual machine s not found
    failed: [localhost] => (item=['s', 'undefine']) => {"failed": true, "item": ["s", "undefine"]}
    msg: virtual machine s not found
    failed: [localhost] => (item=['[', 'destroy']) => {"failed": true, "item": ["[", "destroy"]}
    msg: virtual machine [ not found
    failed: [localhost] => (item=['[', 'undefine']) => {"failed": true, "item": ["[", "undefine"]}
    msg: virtual machine [ not found
    failed: [localhost] => (item=["'", 'destroy']) => {"failed": true, "item": ["'", "destroy"]}
    msg: virtual machine ' not found
    failed: [localhost] => (item=["'", 'undefine']) => {"failed": true, "item": ["'", "undefine"]}
    msg: virtual machine ' not found
    failed: [localhost] => (item=['o', 'destroy']) => {"failed": true, "item": ["o", "destroy"]}
    msg: virtual machine o not found
    failed: [localhost] => (item=['o', 'undefine']) => {"failed": true, "item": ["o", "undefine"]}
    msg: virtual machine o not found
    etc…
    ```
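The root cause described above is that libvirt's `<metadata>` element accepts only one top-level child per namespace, so writing each ansible tag as its own namespaced element silently drops all but one. A hedged sketch of the idea behind the fix, using a hypothetical namespace URI and tag names rather than the repository's actual ones, is to group every tag under a single namespaced element:

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace and tag values; the real openshift-ansible names may differ.
NS = "https://example.com/ansible-tags"
tags = ["deployment-type-origin", "env-test", "host-type-master"]

metadata = ET.Element("metadata")
# One top-level element for the whole namespace, holding one child per tag,
# instead of one top-level element per tag (which libvirt would collapse to one).
container = ET.SubElement(metadata, "{%s}instance" % NS)
for tag in tags:
    ET.SubElement(container, "{%s}tag" % NS).text = tag

print(ET.tostring(metadata).decode())
```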
* Configuration updates for latest builds and major refactor (Jason DeTiberus, 2015-04-14, 14 files, -37/+227)

    Configuration updates for latest builds
    - Switch to using create-node-config
    - Switch sdn services to use etcd over SSL
      - This re-uses the client certificate deployed on each node
    - Additional node registration changes
    - Do not assume that metadata service is available in openshift_facts module
    - Call systemctl daemon-reload after installing openshift-master, openshift-sdn-master, openshift-node, openshift-sdn-node
    - Fix bug overriding openshift_hostname and openshift_public_hostname in byo playbooks
    - Start moving generated configs to /etc/openshift
    - Some custom module cleanup
    - Add known issue with ansible-1.9 to README_OSE.md
    - Update to genericize the kubernetes_register_node module
      - Default to use kubectl for commands
      - Allow for overriding kubectl_cmd
      - In openshift_register_node role, override kubectl_cmd to openshift_kube
    - Set default openshift_registry_url for enterprise when deployment_type is enterprise
    - Fix openshift_register_node for client config change
    - Ensure that master certs directory is created
    - Add roles and filter_plugin symlinks to playbooks/common/openshift-master and node
    - Allow non-root user with sudo nopasswd access
    - Updates for README_OSE.md
    - Update byo inventory for adding additional comments
    - Updates for node cert/config sync to work with non-root user using sudo
    - Move node config/certs to /etc/openshift/node
    - Don't use path for mktemp. addresses: https://github.com/openshift/openshift-ansible/issues/154

    Create common playbooks
    - create common/openshift-master/config.yml
    - create common/openshift-node/config.yml
    - update playbooks to use new common playbooks
    - update launch playbooks to call update playbooks
    - fix openshift_registry and openshift_node_ip usage

    Set default deployment type to origin
    - openshift_repo updates for enabling origin deployments
      - also separate repo and gpgkey file structure
      - remove kubernetes repo since it isn't currently needed
    - full deployment type support for bin/cluster
      - honor OS_DEPLOYMENT_TYPE env variable
      - add --deployment-type option, which will override OS_DEPLOYMENT_TYPE if set
      - if neither OS_DEPLOYMENT_TYPE nor --deployment-type is set, defaults to origin installs

    Additional changes:
    - Add separate config action to bin/cluster that runs ansible config but does not update packages
    - Some more duplication reduction in cluster playbooks.
    - Rename task files in playbooks dirs to have tasks in their name for clarity.
    - update aws/gce scripts to use a directory for inventory (otherwise when there are no hosts returned from dynamic inventory there is an error)

    libvirt refactor and update
    - add libvirt dynamic inventory
    - updates to use dynamic inventory for libvirt
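Among the changes above, bin/cluster gains full deployment type support: it honors the OS_DEPLOYMENT_TYPE environment variable, lets a --deployment-type option override it, and defaults to origin when neither is set. A minimal sketch of that precedence logic (the option and variable names come from the entry above; the surrounding structure is illustrative, not the actual bin/cluster code):

```python
import argparse
import os

def resolve_deployment_type(argv=None):
    """--deployment-type beats OS_DEPLOYMENT_TYPE, which beats the 'origin' default."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--deployment-type", dest="deployment_type", default=None)
    args, _ = parser.parse_known_args(argv)
    if args.deployment_type:
        return args.deployment_type
    return os.environ.get("OS_DEPLOYMENT_TYPE", "origin")

print(resolve_deployment_type(["--deployment-type", "enterprise"]))  # -> enterprise
print(resolve_deployment_type([]))  # -> OS_DEPLOYMENT_TYPE if exported, else 'origin'
```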
* Add libvirt as a provider (Lénaïc Huard, 2015-04-10, 2 files, -0/+4)
|
* Add byo playbooks and enterprise docs (Jason DeTiberus, 2015-04-03, 2 files, -0/+38)
    - added byo playbooks
    - added byo (example) inventory
    - added a README_OSE.md for getting started with Enterprise deployments
    - Added an ansible.cfg as an example for configuration helpful for playbooks/roles
* openshift_facts role/module refactor default settings (Jason DeTiberus, 2015-04-03, 6 files, -18/+2)
    - Add openshift_facts role and module
      - Created new role openshift_facts that contains an openshift_facts module
      - Refactor openshift_* roles to use openshift_facts instead of relying on defaults
      - Refactor playbooks to use openshift_facts
      - Cleanup inventory group_vars
    - Update defaults
      - update openshift_master role firewall defaults
        - remove etcd peer port, since we will not be supporting clustered embedded etcd
        - remove 8444 since console now runs on the api port by default
        - add 8444 and 7001 to disabled services to ensure removal if updating
    - Add new role os_env_extras_node that is a subset of the docker role
      - previously, we were starting/enabling docker which was causing issues with some installations
      - Does not install or start docker, since the openshift-node role will handle that for us
      - Only adds root to the dockerroot group
    - Update playbooks to use ops_env_extras_node role instead of docker role
    - os_firewall bug fixes
      - ignore ip6tables for now, since we are not configuring any ipv6 rules
      - if installing package do a daemon-reload before starting/enabling service
    - Add aws support to bin/cluster
    - Add list action to bin/cluster
    - Add update action to bin/cluster
    - cleanup some stray debug statements
    - some variable renaming for clarity
* Automatic commit of package [openshift-ansible-inventory] release [0.0.2-1]. (Thomas Wiest, 2015-03-26, 1 file, -1/+9)
|
* added the ability to have a config file in /etc/openshift_ansible to multi_ec2.py. (Thomas Wiest, 2015-03-25, 2 files, -4/+19)
* Merge pull request #97 from jwhonce/wip/cluster (Jhon Honce, 2015-03-24, 5 files, -0/+20)
    Use ansible playbook to initialize openshift cluster
|\
| * gce inventory/playbook updates for node registration changes (Jason DeTiberus, 2015-03-24, 5 files, -0/+16)
| |
| * Various fixes (Jason DeTiberus, 2015-03-24, 1 file, -0/+4)
      - playbooks/gce/openshift-cluster:
        - Remove some stray debugging statements
        - Some minor formatting fixes
          - removing un-necessary quotes
          - cleaning up some jinja templates for readability
        - add a play to the launch playbook to apply the os_update_latest role on all hosts in the new environment
        - improve setting groups and gce_public_ip when using add_host module
          - set gce_public_ip as a variable for the host using the returned gce instance_data
          - add a group for each tag configured on the host (pre-pending tag_ to the tag name)
        - update the openshift-master/config.yml and openshift-node/config.yml includes to use the tag_env-host-type groups
      - openshift-{master,node}/config.yml
        - Some cleanup
          - remove some extraneous quotes
          - remove connection: ssh from remote hosts, since it is the default
        - remove user: root and instead set ansible_ssh_user in inventory/gce/group_vars/all
        - set openshift_public_ip and openshift_env to templated values in inventory/gce/group_vars/all as well
        - no longer set openshift_node_ips for the master host, since nodes will register themselves now when they are configured (prevent reboot on adding nodes)
        - move setting openshift_master_ips and openshift_public_master_ips using set_fact and instead use the vars: of the 'Configure Instances' play
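One detail in the list above, adding a group for each tag configured on the host by pre-pending tag_ to the tag name, amounts to a small mapping from instance tags to inventory group names. A hedged sketch of just that name mapping (the add_host wiring itself lives in the Ansible playbook and is not shown):

```python
def tag_groups(tags):
    """Map GCE instance tags to inventory group names by pre-pending 'tag_'."""
    return ["tag_%s" % t for t in tags]

# e.g. a host tagged env-test and host-type-master would join these groups:
print(tag_groups(["env-test", "host-type-master"]))
# ['tag_env-test', 'tag_host-type-master']
```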
* | Automatic commit of package [openshift-ansible-inventory] release [0.0.1-1]. (Thomas Wiest, 2015-03-24, 1 file, -1/+4)
| |
* | Added spec files and tito configs. (Thomas Wiest, 2015-03-24, 1 file, -0/+37)
|/
* Merge pull request #66 from lhuard1A/explicit_python2 (Thomas Wiest, 2015-03-09, 3 files, -3/+3)
    Explicitly use python2
|\
| * Explicitly use python2 (Lénaïc Huard, 2015-02-19, 3 files, -3/+3)
      Some distributions use python3 as the default python. On those, we need to
      explicitly use python2.
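A hedged illustration of the change described above: on distributions where the unversioned python is python3, scripts such as the dynamic inventories need to name python2 explicitly in their shebang. The exact interpreter path used in the repository may differ from the one shown here:

```python
#!/usr/bin/env python2
# Explicitly request python2 so the script does not break on distributions
# where the unversioned "python" points at python3.
from __future__ import print_function

print("running under an explicitly requested python2 interpreter")
```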
* | fixed bug in new ec2.py destination_format code (Thomas Wiest, 2015-03-09, 1 file, -2/+8)
| |
* | Add flexible destination format string to ec2.py (Andy Grimm, 2015-03-04, 2 files, -1/+12)
      This allows us to construct hostnames from a format string plus ec2 tag values.
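The entry above describes building inventory hostnames from a format string plus EC2 tag values. A rough sketch of that mechanism, assuming illustrative option names destination_format and destination_format_tags (check the synced ec2.py/ec2.ini for the exact names and semantics):

```python
def format_destination(fmt, tag_names, instance_tags):
    """Build a hostname by substituting the named EC2 tag values into fmt."""
    values = tuple(instance_tags.get(name, "") for name in tag_names)
    return fmt % values

# As such settings might appear in an ec2.ini-style config (assumed names):
destination_format = "%s-%s.example.com"
destination_format_tags = ["Name", "environment"]

print(format_destination(destination_format,
                         destination_format_tags,
                         {"Name": "web01", "environment": "prod"}))
# web01-prod.example.com
```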
* | Sync ec2.py with upstream (Andy Grimm, 2015-03-04, 1 file, -61/+239)
|/
* Updated to the latest gce.py from upstream. It includes _meta and hostvars!!! (Thomas Wiest, 2015-02-13, 1 file, -3/+15)
|
* Removed comments and cleaned up code. (Kenny Woodson, 2015-02-05, 1 file, -1/+0)
|
* Attempting to only refresh cache when doing --list on ossh. (Kenny Woodson, 2015-02-05, 2 files, -3/+16)
|
* added opssh.py (Thomas Wiest, 2015-01-28, 1 file, -0/+1)
|
* fixed bug in multi_ec2.py where relative-path providers only worked if you ran multi_ec2.py from the inventory directory. (Thomas Wiest, 2014-12-18, 1 file, -0/+4)
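The fix above concerns provider scripts given as relative paths, which previously resolved only when multi_ec2.py was run from the inventory directory. A minimal sketch of the usual remedy, anchoring relative provider paths to the script's own directory instead of the current working directory (function and variable names here are illustrative, not the actual multi_ec2.py code):

```python
import os

def resolve_provider_path(provider_path, base_dir=None):
    """Resolve a provider script path; relative paths are taken relative to base_dir."""
    if os.path.isabs(provider_path):
        return provider_path
    if base_dir is None:
        # Anchor to this file's directory rather than os.getcwd(), so resolution
        # does not depend on where the user launched the script from.
        base_dir = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(base_dir, provider_path)

print(resolve_provider_path("aws/hosts/ec2.py"))
```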
* unittest for merge_destructively. More to come (Kenny Woodson, 2014-12-18, 1 file, -4/+6)
      Added a readme so it's obvious how to run tests
      Leaving this alone. Getting cleaned up in next PR
      Fixing space
* changed multi_ec2.py to print the json result string instead of the python pretty print string. (Thomas Wiest, 2014-12-18, 2 files, -12/+9)
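The change above swaps Python's pretty-printed repr for proper JSON on stdout, which is what consumers of a dynamic inventory script expect. A small sketch of the difference, using a made-up result dict:

```python
import json
from pprint import pformat

result = {"ec2": {"hosts": ["10.0.0.1"], "vars": {}}}

# Python pretty print: single quotes, not valid JSON for downstream consumers.
print(pformat(result))

# JSON result string: what a dynamic inventory script should emit instead.
print(json.dumps(result, indent=2))
```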
* Variable-ized the config file path with the name. (Kenny Woodson, 2014-12-18, 1 file, -1/+1)
|
* Added default environment behavior for aws credentials (Kenny Woodson, 2014-12-18, 1 file, -8/+30)
|
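The entry above adds default environment behavior for AWS credentials. A hedged sketch of the common pattern, falling back to the standard AWS environment variables when the config supplies no keys (the config key names shown are hypothetical):

```python
import os

def aws_credentials(config):
    """Prefer credentials from config, else fall back to the standard env variables."""
    access_key = config.get("aws_access_key_id") or os.environ.get("AWS_ACCESS_KEY_ID")
    secret_key = config.get("aws_secret_access_key") or os.environ.get("AWS_SECRET_ACCESS_KEY")
    return access_key, secret_key

print(aws_credentials({}))  # picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY if exported
```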
* Updated the function name to accurately reflect its procedure. (Kenny Woodson, 2014-12-12, 1 file, -3/+3)
|
* Updated merge function to merge recursively (Kenny Woodson, 2014-12-12, 2 files, -24/+37)
|
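The recursive merge referenced above (and exercised by the later merge_destructively unittest entry) combines the results of several providers into one inventory, descending into nested dictionaries rather than overwriting whole top-level keys. A minimal recursive-merge sketch, not the actual merge_destructively implementation:

```python
def merge_recursively(dest, src):
    """Merge src into dest in place; nested dicts are merged, other values overwrite."""
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dest.get(key), dict):
            merge_recursively(dest[key], value)
        else:
            dest[key] = value
    return dest

a = {"_meta": {"hostvars": {"host1": {"zone": "us-east-1a"}}}}
b = {"_meta": {"hostvars": {"host2": {"zone": "us-east-1b"}}}}
print(merge_recursively(a, b))
# {'_meta': {'hostvars': {'host1': {...}, 'host2': {...}}}}
```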
* Fixed naming for cache file. (Kenny Woodson, 2014-12-12, 1 file, -1/+1)
|
* In order to agree with previous renaming, these naming changes were made. (Kenny Woodson, 2014-12-12, 3 files, -2/+2)
|
* Updated with the class name. (Kenny Woodson, 2014-12-12, 1 file, -2/+2)
|
* Updated name to multi_ec2 instead of meta. (Kenny Woodson, 2014-12-12, 2 files, -0/+1)
|
* First version of meta inventory. (Kenny Woodson, 2014-12-12, 3 files, -0/+199)
|
* removed gce.ini and instead added instructions for setting up secrets.py (Thomas Wiest, 2014-10-29, 1 file, -47/+0)
|
* Added atomic aws host to cloud.rb (Thomas Wiest, 2014-10-23, 2 files, -0/+666)
|
* Initial Commit. Sharing is caring (Kenny Woodson, 2014-09-16, 3 files, -0/+324)