Wednesday, February 20, 2019

how to create a dmg for a Mac from Linux

First you need to create the dmg, then copy the files into it.

The following creates a 128M dmg file that can be mounted on a Mac:

sudo apt install hfsprogs -y
dd if=/dev/zero of=/tmp/dmgtmp.dmg bs=1M count=128
mkfs.hfsplus -v ThisIsADmg /tmp/dmgtmp.dmg
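
You can quickly check the result with file /tmp/dmgtmp.dmg, which should report something like a Macintosh HFS Extended volume.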


Then you can mount it in Linux with something like:

sudo mkdir -p /mnt/dmgtmp
sudo mount -o loop /tmp/dmgtmp.dmg /mnt/dmgtmp

then copy the content into it (/mnt/dmgtmp) and unmount it. 
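
For example (MyApp.app here is just a placeholder for your content):

sudo cp -r MyApp.app /mnt/dmgtmp/
sudo umount /mnt/dmgtmp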

After that you can copy the dmg to a Mac and mount it there.

:) my two cents Alex 

Monday, January 28, 2019

docker containers on a VM do not get the correct MTU

I noticed some time ago the following issue with docker/LXD containers on top of a VM hosted by OpenStack:

apt-get hangs when called within an LXD or docker container. 

For instance:

$ docker run -it ubuntu bash
# apt-get update
0% [Waiting for headers]

This only occurs in Ubuntu Xenial, not on Trusty or CentOS. 
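
You can confirm the mismatch by comparing the MTU on the VM with the one the container gets; a quick check (the interface name ens3 is just an assumption, adjust it to your VM):

$ cat /sys/class/net/ens3/mtu
$ docker run --rm ubuntu cat /sys/class/net/eth0/mtu

If the container reports a bigger MTU than the VM interface, large packets get dropped and apt-get stalls exactly as above.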

There is an easy workaround based on iptables to clamp the MSS to the path MTU: 
$ sudo iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

If you use juju this is annoying since, for example, juju bootstrap fails.
Apparently it could be related to a known upstream issue.

On the LXD side there is another workaround, which can be applied on the machine hosting the LXD containers: 

"""
lxc profile device remove default <interface on the LXD bridge name>
lxc profile device add default <interface on the LXD bridge name> nic nictype=bridged parent=lxdbr0 mtu=1400
"""


my 2 cents 

Monday, January 7, 2019

How to create and use Application Credentials in OpenStack Rocky

I recently got my nose into application credentials, to avoid exposing the plain password of my OpenStack account in several places (like juju, kubernetes or any other consumer of an OpenStack cloud backend).

Here are a few steps to get and test your application credentials:
  • open your dashboard 
  • if you are allowed to create application credentials you should see a button on the left side of the dashboard, under Identity
  • then fill in the fields and download the rc file to use the credentials from a CLI with the OpenStack client
  • then use it to set the environment variables (e.g. the following)
#!/usr/bin/env bash

export OS_AUTH_TYPE=v3applicationcredential
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME="garr-ct1"
export OS_INTERFACE=public
export OS_APPLICATION_CREDENTIAL_ID=3df050279bc1490c871a52d49c3b5030
export OS_APPLICATION_CREDENTIAL_SECRET=<what you chose as secret> 
  • use e.g. the following command to get a token: 
>openstack token issue
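
As an alternative to the dashboard, the credential itself can also be created from the CLI (the name and secret below are placeholders):

>openstack application credential create --secret <your secret> <my-app-cred-name>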


The documentation on how to deal with the API (POST/GET) is quite complete here:
https://developer.openstack.org/api-ref/identity/v3/#create-application-credential

As an example you can use the following to create a token out of your app credentials (after generating them with a secret):

>curl -i -H "Content-Type: application/json" -d ' { "auth": { "identity": { "methods": ["application_credential"],  "application_credential": {  "id": "<your app credentials id>", "secret": "<your secret>"}}}}' "https://keystone.cloud.garr.it:5000/v3/auth/tokens"


HTTP/1.1 201 Created
Date: Mon, 07 Jan 2019 15:21:44 GMT
Server: Apache/2.4.18 (Ubuntu)
X-Subject-Token: fd00f5811b054d45aab98e346e794a73
Vary: X-Auth-Token
X-Distribution: Ubuntu
x-openstack-request-id: req-e94efb0a-6d93-46be-9e1c-77317b0cdfbd
Content-Length: 13819
Content-Type: application/json

{"token": {"is_domain": false, "methods": ["application_credential"], "roles": [{"id": "f526fd6908794fcf8c70804fa6cdc8a3", "name": "Member"}], "application_credential": {"restricted": true, "id": "e9a7089baad04bf093fc2a5a665e4f5c", "name": "terraform"}, "is_admin_project": false, "project": {"domain": {"id": "2b932823d0dc46799acbfabd18b45ee4", "name": "cloudusers"}, "id": "21b5b236c2e244dcba9557ec8745d61a", "name": "olimpiadi-istat"}, "catalog": [{"endpoints": [......]

Hope this will save you some time Alex 


Friday, September 28, 2018

LXD Increase ZFS loop Storage

Today I needed to grow a loop device backing a ZFS pool that was hosting several LXC containers managed via LXD.

I discovered that LXD doesn't let you directly grow a loop-backed ZFS pool, but you can do so with:

sudo truncate -s +XXXG /var/lib/lxd/disks/<POOL>.img
sudo zpool set autoexpand=on lxd
sudo zpool online -e lxd /var/lib/lxd/disks/<POOL>.img
sudo zpool set autoexpand=off lxd
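
Then you can verify that the pool actually picked up the new size (here lxd is the pool name used above):

sudo zpool list lxd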

Monday, January 23, 2017

Windows Image Creation from any Operating System

Requirements: VirtualBox, a Windows ISO, the VirtIO drivers ISO, and a Linux box with qemu-img for the final conversion.

Procedure:

Boot a VirtualBox VM from the Windows ISO:
Qcow disk type
40GB root disk
Load your Windows ISO into the primary CD drive
Add a secondary CD drive and attach the VirtIO ISO to it

Proceed with Windows installation
Load VirtIO drivers from the attached ISO during the installation
Enable RDP & user account 
ref http://www.andreamonguzzi.it/windows-server-2012-installare-e-configurare-il-ruolo-rds/
Once the image spawns and you get into the machine, enable RDP and create a user account with admin privileges.

Install Cloudbase-Init
http://www.cloudbase.it/downloads/CloudbaseInitSetup_Beta.msi

Overwrite the default configuration file at C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf\cloudbase-init.conf with the following

[DEFAULT]
username=Admin
groups=Administrators
inject_user_password=true
plugins=cloudbaseinit.plugins.windows.sethostname.SetHostNamePlugin,cloudbaseinit.plugins.windows.createuser.CreateUserPlugin,cloudbaseinit.plugins.windows.networkconfig.NetworkConfigPlugin,cloudbaseinit.plugins.windows.sshpublickeys.SetUserSSHPublicKeysPlugin,cloudbaseinit.plugins.windows.extendvolumes.ExtendVolumesPlugin,cloudbaseinit.plugins.windows.userdata.UserDataPlugin
network_adapter=
config_drive_raw_hhd=true
config_drive_cdrom=true
verbose=true
logdir=C:\Program Files\Cloudbase Solutions\Cloudbase-Init\log\
logfile=cloudbase-init.log

Disable the Windows Firewall, or set up the services that you want to allow for finer control (this was just a rough test)

Apply customization to image
Install packages, add users, modify configurations, etc.
Run Windows Update
Run Sysprep:
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

Convert disk image into qcow2
Sorry to jump into Ubuntu: we need it for this step. VirtualBox only supports Qcow images, not Qcow2, so we'll use qemu-img to convert the image to Qcow2 for use with OpenStack, as below:
qemu-img convert -f qcow -O qcow2 windows.qcow windows.qcow2
Reboot the VM
Now import the qcow2 image into glance
# glance image-create --name windows --is-public=true --disk-format=qcow2 --container-format=bare --file <location of the qcow2 image to import into glance>
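
On newer releases, where the glance CLI has been replaced by the unified client, the equivalent should be something like:

# openstack image create --disk-format qcow2 --container-format bare --public --file windows.qcow2 windows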


please comment with your experience 

reference:
http://docs.openstack.org/image-guide/windows-image.html
https://maestropandy.wordpress.com/2014/12/05/create-a-windows-openstack-vm-with-virtualbox/


Alex Barchiesi

Tuesday, October 18, 2016

root access a VM without root password

So here is how to get into your VMs without knowing the root password or having the ssh key to reach them. 
Basically we are going to mount a qemu disk image and make the changes needed. 
In order to mount a QEMU/KVM disk image you need to use qemu-nbd, which lets you use the NBD protocol to share the disk image on the network.

sudo modprobe nbd max_part=8
sudo qemu-nbd -c /dev/nbd0 /var/lib/libvirt/images/img_name.qcow2
sudo mkdir -p /mnt/kvm
sudo mount /dev/nbd0p1 /mnt/kvm

Do whatever changes you need (e.g. to get root access on Ubuntu, allow root login in /etc/ssh/sshd_config and put the appropriate hash of the password in /etc/shadow), as sketched below.
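
A minimal sketch of those two changes (MyNewPass and the sed one-liners are just an illustration, you can of course edit the files by hand; openssl passwd -6 needs a recent OpenSSL, mkpasswd -m sha-512 works too):

HASH=$(openssl passwd -6 'MyNewPass')
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /mnt/kvm/etc/ssh/sshd_config
sudo sed -i "s|^root:[^:]*:|root:${HASH}:|" /mnt/kvm/etc/shadow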

sudo umount /mnt/kvm
sudo nbd-client -d /dev/nbd0


That's it.
Hope this helps Alex Barchiesi

Thursday, July 28, 2016

LXC manual migration on ZFS

Easy notes on how to migrate an LXC container to a ZFS filesystem.
I'll use as an example a zfs pool called vd_lxc_container.

Basically, if <LXC_name> is the non-zfs LXC and <LXC_ZFS> is the zfs version, here are the commands to issue:

lxc-stop -n <LXC_name>
mv /var/lib/lxc/<LXC_name> /var/lib/lxc/<LXC_name>_OLD
lxc-copy -B zfs -n <LXC_name>_OLD -N <LXC_ZFS>
zfs list 
if needed, change the mount point and rename the dataset (here <LXC_name> is the original container name):
zfs set mountpoint=/var/lib/lxc/<LXC_name>/rootfs vd_lxc_container/<LXC_ZFS>
zfs rename vd_lxc_container/<LXC_ZFS> vd_lxc_container/<LXC_name>
(zfs list)
Modify the config file to match names and MAC address if needed 
lxc-start -n <LXC_ZFS>
lxc-destroy -s -n <LXC_name>_OLD
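
To double check that everything is in place:

lxc-ls -f
zfs list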

best Alex