Binary Options Strategies & Trading Systems Revealed

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual that is made for Windows users. Although if you'd want to try it on Linux or MacOS we did add the commands necesary to get the CodeReady Containers to run on your operating system. Be warned however there are some system requirements that are necessary to run the CodeReady Containers that we will be using. These requirements are specified within chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform and has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or MacOS we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for developing and testing purposes. There are also CodeReady Workspaces, these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps the programmers and developers make their application faster because of CodeReady Containers and CodeReady Workspaces and it also allows them to test their application in the same environment. One of the advantages provided by OpenShift is the efficient container orchestration. This allows for faster container provisioning, deploying and management. It does this by streamlining and automating the automation process.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual some knowledge is mandatory, because most of the commands are done within the Command Line interface it is necessary to know how it works and how you can browse through files/folders. If you either don’t have this basic knowledge or have trouble with the basic Command Line Interface commands from PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for windows:
Https://www.sans.org/security-resources/sec560/windows\_command\_line\_sheet\_v1.pdf
Another option is to read through the operating system’s documentation or introduction guides. Though the documentation can be overwhelming by the sheer amount of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-serveadministration/windows-commands/windows-commands
MacOS
Https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge on PaaS like Dockers and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

The minimum system requirements for the Red Hat OpenShift CodeReady Containers has the following minimum hardware:
Hardware requirements
Code Ready Containers requires the following system resources:
● 4 virtual CPU’s
● 9 GB of free random-access memory
● 35 GB of storage space
● Physical CPU with Hyper-V (intel) or SVM mode (AMD) this has to be enabled in the bios
Software requirements
The minimum system requirements for the Red Hat OpenShift CodeReady Containers has the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and Network Manager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution Installation command
Fedora Sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS Su -c 'yum install NetworkManager'
Debian/Ubuntu Sudo apt install qemu-kvm libvirt-daemonlibvirt-daemon-system network-manage

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application this will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press login and after that select the option “Create one now”
After making an account the next step is to download the latest release of CodeReady Containers and the pulled secret on “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pulled secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use the command line interface to go to the location in your $PATH where you extracted the CodeReady zip.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps please confirm that the correct and up to date crc binary is in use by checking it with the $crc version command, this should provide you with the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pulled secret, once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, you need to keep in mind that it is not possible to make any changes to the virtual machine. For this tutorial however it is not necessary to change the configuration, if you don’t want to make any changes please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
\ it is possible that you will get a Nameserver error later on, if this is the case please start it with* crc start -n 1.1.1.1

Configuration

It is not is not necessary to change the default configuration and continue with this tutorial, this chapter is here for those that wish to do so and know what they are doing. However, for MacOS and Linux it is necessary to change the dns settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command has some requirements before it’s able to configure. This requirement is a subcommand, the available subcommands for this binary and virtual machine are:
get, this command allows you to see the values of a configurable property
set/unset, this command can be used for 2 things. To display the names of, or to set and/or unset values of several options and parameters. These parameters being:
○ Shell options
○ Shell attributes
○ Positional parameters
view, this command starts the configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that’s done by the $crc start end $crc setup commands. By default, the startup checks will stop with the process if their conditions are not met. To bypass this potential issue, you can set the value of a property that starts with skip-check or warn-check to true to skip the check or warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get C:\Users\[username]\$PATH>crc config set C:\Users\[username]\$PATH>crc config unset C:\Users\[username]\$PATH>crc config view C:\Users\[username]\$PATH>crc config --help 

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPU’s and amount of memory available for the virtual machine.
To increase the number of vCPU’s available to the virtual machine use the $crc config set CPUs . Keep in mind that the default number for the CPU’s is 4 and the number of vCPU’s you wish to assign must be equal or greater than the default value.
To increase the memory available to the virtual machine, use the $crc config set memory . Keep in mind that the default number for the memory is 9216 Mebibytes and the amount of memory you wish to assign must be equal or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs  C:\Users\[username]\$PATH>crc config set memory > 

Configuring the DNS

Window / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing the crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start additional checks to verify the configuration will be executed.

macOS DNS setup

MacOS expects the following DNS configuration for the CodeReady Containers
● The CodeReady Containers creates a file that instructs the macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolvetesting.
● The oc binary requires the following CodeReady Containers entry to function properly, api.crc.testing adds an entry to /etc/hosts pointing at the VM IPaddress.

Linux DNS setup

CodeReady containers expect a slightly different DNS configuration. CodeReady Container expects the NetworkManager to manage networking. On Linux the NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManageconf.d/crc-nm-dnsmasq.conf.
To set it up properly the dnsmasq instance has to forward the requests for crc.testing and apps-crc.testing domains to “192.168.130.11”. In the /etc/NetworkManageconf.d/crc-nm-dnsmasq.conf this will look like the following:
● Server=/crc. Testing/192.168.130.11
● Server=/apps-crc. Testing/192.168.130.11

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift clusters can be accessed through the OpenShift web console or the client binary(oc).
First you need to execute the $crc console command, this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env $Env:PATH = "CC:\Users\OpenShift\.crc\bin\oc;$Env:PATH" # Run this command to configure your shell: # & crc oc-env | Invoke-Expression 
This means we have to execute* the command that the output gives us, in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
\this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary*
To test if this step went correctly execute the following command, if it returns without errors oc is set up properly
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to login as a developer user, this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start will provide you with the password that is needed to login with the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc can now be used to interact with your OpenShift cluster. If you for instance want to verify if the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default the CodeReady Containers disables the functions provided by the commands $machine-config and $monitoringOperators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to login on the cluster. If you have not yet done this, this can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu top left.
Now that you are properly logged in press the dropdown menu shown in the image below, from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with a displayname CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- Https://hub.docker.com/
- Https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within powershell
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm imagestream.image.openshift.io/mediawiki imported 

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an image with the previously imported image go back to the console and topology. From here on select container image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creating process you should see the following, this means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling, and horizontal scaling. Vertical scaling is adding only more CPU and hard disk and is no longer supported by OpenShift. Horizontal scaling is increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By either pressing the up or down arrow more pods of the same application can be added. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since OpenShift Container platform is built on Kubernetes it might be interesting to know some theory about its networking. Kubernetes, on which the OpenShift Container platform is built, ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP address. This makes all containers within the Pod behave as if they were on the same host. By giving each pod its own IP address, pods can be treated as physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller, Within OpenShift it is possible to set your own certificate. A user must have a certificate / key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies, by default all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create Network Policy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete Network Policy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
Storage
OpenShift makes use of Persistent Storage, this type of storage uses persistent volume claims(PVC). PVC’s allow the developer to make persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options:
It is however important to know how to manually reclaim the persistent volumes, since if you delete PV the associated data will not be automatically deleted with it and therefore you cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV, this can be done by executing the following command
$oc delete  
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset or if you with to reuse the same storage asset you can now create a PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PV’s in your cluster and will display their following attributes: Name, Capacity, Accesmodes, Reclaimpolicy, Statusclaim, Storageclass, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following command’s, depending on your preferred policy:
$oc patch pv  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' 
In this example the reclaim policy will be changed to Retain.
$oc patch pv  -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}' 
In this example the reclaim policy will be changed to Recycle.
$oc patch pv  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' 
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check if you are logged in as Developer and click on “Monitoring”. Normally this function is not activated within the CodeReady containers, because it uses a lot of resources (Ram and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the documentation of OpenShift is a user, an entity that interacts with the OpenShift Container Platform API. These can be a developer for developing applications or an administrator for managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group’s members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we’re going to create a new user, the way this is done depends on the identity provider, this depends on the mapping method used as part of the identity provider configuration.
for more information on what mapping methods are and how they function:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as following
$oc create user  
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity : 
The is the name of the identity provider in the master configuration. For example, the following commands create an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a useidentity mapping for the created user and identity:
$oc create useridentitymapping :  
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now were going to assign a role to this new user, this can be done by executing the following command:
$oc create clusterrolebinding  \ --clusterrole= --user= 
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \ --clusterrole=cluster-admin --user=admin 

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady container can't connect to the internet due to a Nameserver error. When this is encountered a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon, below you can see an example of how this is going to look like together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Openshift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenteen/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift [link] [comments]

Student Loan Default: The Guide (ReUploaded)

NOTE: I'm pasting this guide from where I originally found it, over on Studentloandefaulters. It was originally pasted there from someone who found it after the original was deleted.

Student Loan Default: The Guide (reuploaded)

📷
The original guide that was recently deleted here: https://www.reddit.com/studentloandefaulters/comments/cg1fd7/student_loan_default_a_guide/
I take no credit for this post, just happened to have it saved in a document and thought I'd be doing an injustice by not sharing this information once I saw the original post was missing! All credit goes to the original author, and without further ado...
Student Loan Default: A Guide
I’ve been wanting to write this for a long time, and seeing that person be in $500,000 of debt and no one really helping him on studentloans, I felt it was time to summarize everything I’ve learned. While there is great information on this sub, it is not centralized. It requires some digging. I hope now to bring all of it to the surface.
Definitions:
Strategic Default: When a borrower realizes that he or she can spend less money by not paying a loan. The borrower waits out the statute of limitations and then either settles or waits the debt out.
Shills: People who are paid to prevent the spread of student loan default information
Statute of Limitations: The number of years your state requires before a debt can no longer be collected.
Cosigner: The poor person who is just as legally required to pay your loans as you are
Foreign Earned Income Tax Exclusion: A tax rule that states any US citizen can earn up to about $100,000 a year in another country and report their US taxes as 0.
Fraudulent Transfer: When a party tries to move assets to someone else in order to avoid a lien on their property.
Lien: Essentially when the government slaps a bill onto your property forcing you to pay off a debt before you can sell the property.
Income Based Repayment (IBR): Federal loans can be paid with 15% of your discretionary income (money earned after taxes) instead of a higher, unpayable amount
Aggregate Student Loan Limit: The total amount a student can take out before the federal government or a private lender stops authorizing new loans
Wage Garnishment: When a court forces your employer to take out a certain percentage of your paycheck to pay back a debt
Bank Levy: When the government or a court takes all of the money directly out of your bank account to pay a debt
Private Loans: Loans that originate from anyone but the federal government. These loans have a statute of limitations and less power but higher interest rates.
Federal Loans: These loans have no statute of limitations, the government can collect anything you earn to get these back, and they come with IBR which is manageable
Sallie Mae: The worst private lender on the market. They only offer deferment for four short years.
Forbearance: A period where you do not have to pay your student loans, but interest accrues.
Deferment: A period where you do not have to pay your student loans, but interest does not accrue.
Credit Score: A number that tells people how responsible of a borrower you are.
Student Loan Tax Bomb: After you have paid for 10 - 25 years on your federal loans, you are forgiven the rest. That is considered income by the IRS. You then add this “income” to your regular income for the year and pay the tax. It can be over $10,000.
Insolvency: When you are unable to pay your debts. This works well for defusing the student loan tax bomb.
Public Service Loan Forgiveness: If you work for 10 years at a government job, you can get your entire federal student loan balance forgiven. In 2019, the feds are making it near impossible to collect. This could change.
A note on cosigners before we begin: Look, your cosigner is probably going to be very mad at you. Prepare for your relationship to be strained. You need to try and get them on the same page as you, and I do offer a tactic here to at least shift all of the financial burden off of your cosigner below. If you decide to do any of these tactics without getting your cosigner off the hook, there could be more risk involved if you or your cosigners have a lot of assets.
Strategy
Student loan default is a strategy. And to have a good strategy, one must plan as much as possible. You have to know all of your options. While strategy is your overall game plan, tactics are the individual options you have to get your strategy accomplished. Below are the tactics that you can employ to beat the student loan companies.
Tactics
Paying Your Loans: [low risk] In the rare chance you have anywhere between $1,000 to $20,000 in federal student loans and you have completed your bachelor’s degree, you should probably just pay the damn loans. All you have to do is set up an auto debit and forget about it. It will be about 15% of your income. You really want to try and avoid consolidating if you can, because it will count against some of your IBR payments. You would also lose your grace period if you did this. At the end of 10 to 25 years, you will be forgiven all of the loan amount you did not pay. That forgiven amount is considered income by the IRS, so you will be put into a higher tax bracket. I would get an accountant when this comes. In your case, your tax bomb will be low enough where you could probably just pay it. If you want to really shake things up though, you are welcome to try either the Asset Creation Tactic or the Madlad Method below. Here is more information on Income Based Repayment: https://www.studentdebtrelief.us/repayment-plans/income-based-repayment-plan/
Default Private IBR Federal (Staying Put): [low risk] The standard strategy here on studentloandefaulters. As mentioned above, for the federal loans, it’s best to just IBR and automatically debit your bank account each month and forget about it. For the private loans, this is where the game begins. Your overall plan here is to default, wait out the statute of limitations in your home state, and either settle the debt for less than 30% or just hope they leave you alone and you don’t pay at all. From this moment on, whatever you would have paid for your private monthly bill, sock that money away. Once you go past 120 days of no payments, you are in default. This is where the phone calls come in. They will start to harass you. They will call your work, your cell phone, your cosigner, etc relentlessly. Most likely, they’ll start doing this before you get to default. As they call you, you can either just give them the cold shoulder or start immediately acting like you do not own the debt. Never admit that you own the debt. Tell them you think they are crazy and have the wrong person. Inform your cosigner to do the same. Once your loans are sold to a collection agency, wait until they call you and ask for verification of the debt. If they do not provide it, you won. Chances are, they will be able to verify it, so just make sure you never admit to the debt on the phone or make a payment. If you make a payment, you’ll reset the statute of limitations. Do not give them five dollars, two dollars, a penny. If they do sue you, show up for court. Get a lawyer if you can afford it. You have to show up to court, or they win automatically. Even if you don’t have a lawyer in court, you need to make them verify the debt. You could still lose here. If you do lose in court, go to my tactic of “The Cat and Mouse Game.” They are playing a numbers game, and if you are harder to sue than John Smith down the street, they may prey on him or her instead of you. Now, there are four states in the United States that do not have wage garnishment: Pennsylvania, North Carolina, South Carolina, and Texas. You could move there, and if you have barely any assets, you are considered judgement proof. This means you’re not worth the time to be sued, because you have nothing to take and cannot be garnished. Moving is hard, though, so that’s a personal decision. Also, from what I understand, if you do move to these states, you can switch your statute of limitations over to their states which may be less time until you cannot be sued anymore. If you do lose and just want to stop here, you could get your bank levied and you could be slapped with up to a 25% wage garnishment until paid in full Clarification: a lot of people do not ever get garnished, and bank levies are rare (they are non-existent on federal loans). Do not let this freak you out!. I repeat this is super rare and not likely to happen. Anyways, you have options at this point. If it does happen, try another tactic like leave the country or cat and mouse below.
Default Private Default Federal: [medium risk] Some of the wilder people have attempted to default on both federal and private loans in order to do a cash settlement. The same strategy above in Default Private IBR Federal applies, but realize that the US government could just step in and do an administrative garnish on you eventually. If you were living some sort of cash existence, you could potentially avoid them and then write them a money order and settle for 30% or something. This way, you avoid the tax bomb and would probably pay a lot less interest overall. If you do this and it works, I would love to hear about it.
Cat and Mouse: [medium risk] So, you want to avoid getting sued or you lost a judgement? You don’t have to sit back and take it. u/nowaysalliemae has successfully avoided being sued by essentially going on the run. You see, to be sued successfully, they need to know where you work. If you get sued, move to another state, and switch jobs, they have to do the entire process over again! This means find you, verify the debt, sue you, etc. You can essentially do this until your statute of limitations runs out. And then, you dispute the debt on your credit score. They take it off at that point, and you just saved a lot of money. I decided to put this as medium risk, because moving around a lot would require some luck. Especially since you would need to work wherever you go, there are a lot of moving parts here. I think it is totally doable, and if you are an adventurous personality type, it could be a lot of fun. This only works for the private student loan side, because the US government has a lot more power. You would still IBR your federal loans on this tactic. For more information, go through nowaysalliemae's post history.
Leave the Country: [medium risk] What if you want to avoid all of this altogether? Do you want a reset button on your life? You can just leave the country and start over. Seriously. Your credit score does not follow you across countries. The federal government cannot garnish your paycheck if you work internationally. You are not a criminal doing this. Furthermore, there is something called the Foreign Earned Income Tax Exclusion. Since you will still IBR your federal loans on this plan, as long as you make less than $100,000 in another country, your US income is zero. This means you just got a free education while you make money in another country. Once you pay zero for 25 years, you will have to defuse your student tax bomb. Tactic Below. Private companies do not stand a chance here. There are countries in the commonwealth such as Australia and Canada that are more willing to take you in if you meet certain requirements. You could teach English at a bunch of places. You could apply for residency at these places or be a perpetual tourist. A perpetual tourist is someone who essentially moves to a new country, goes to a neighboring country for a weekend, and then goes back to that new country they are trying to start a new life in*. This in no means you have to go back to the U.S. Ever. For example, you want to live in Panama forever, every 90 days, you take a weekend trip to Nicaragua. You come back to Panama after the weekend is over and get another 90 day pass. Rinse and repeat. This gives you another 90 days in your country of choice. If you make money on the internet, this strategy would work pretty well. You can just be a perpetual tourist or marry someone in another country and start a new life. This will not be a good fit for everyone, but there’s something exciting about this. If you are young, single, and restless, this could be the adventure of a lifetime. Here's more info on being a perpetual traveler and the FEIE: https://www.escapeartist.com/blog/perpetual-traveler-us-tax-code/
Suspend Payment Without More Debt: [low risk] So recently, it has been brought to my attention that there is a community college, Luna Community College (in Las Vegas, NM), that has tuition so low you could go half time all year for about 684 dollars. They have a small amount of associate's degrees. If you just want to stop paying without taking any more loans, this would be the way to do it. You could do this for many years. Luna Community College's tuition matrix: https://luna.edu/tuition_matrix
Convert Private Loans to Federal: [low risk] From this point on, these are my special tactics I’ve been thinking about. They might work really well for some people. So, you have a bunch of federal loans and a good amount of private loans. You don’t want to fight debt collectors or move around. Try this. This plan only works if you have a bachelor’s degree though. Anyways, there is a special loan offered by the US Federal Government called the Graduate Plus Loan. This loan is incredible, because there is no aggregate student loan limit. In other words, you can borrow as much money as you want here. Even a million dollars no questions asked. All you need is no delinquency or default on your credit report. If you do have these things, you can get a cosigner in on the plan. They won’t ever be responsible anyways because you will defuse the tax bomb at the end. This works to your advantage, because you could go back to school at the graduate level, get a diploma mill master’s degree online, use your room and board payment to start paying off your private loans ASAP. Just make sure you are doing whatever your school considers half time enrollment in order to avoid student loan payments while doing this. Once you’ve gone to school long enough and converted all of your private loans to grad plus loans, you could just go on an IBR plan. This will at least make your life manageable. You would have to defuse your student tax bomb once this is over. Tactic below.
Convert Federal Loans to Private: [medium risk] So, what if you wanted to go the opposite way? Maybe you want to convert all of your federal loans to private ones, default, and then leave the country? Hey, maybe there are reasons you want to hurry up the settlement process. You could essentially do the same strategy as above, but instead just borrow from Sallie Mae, Wells Fargo, etc until all of your federal loans are paid off. Then, either cat and mouse or leave the country. I don’t think a lot of people would find a use for this, but hey who knows?
Asset Creation Method: [high risk] What if you wanted to not just pay off your loans but get ahead in life? Maybe you feel like using your student loan debt to your advantage. Thanks to the work done by u/BinaryAlgorithm, you could really come out on top here. Remember those Grad Plus loans we were talking about? Well, there’s nothing stopping you from continually borrowing all year on these loans, investing the room and board, and acting as if you do not have the debt in the first place. While I had originally said that rental property does not count as income, I cannot find any documentation proving this. You can still invest this money however you want, and you just defuse the tax bomb at the end (if anyone can find that documentation, please let me know). I did find that rental properties offer a lot of ways to reduce your adjusted gross income (management fees, advertising, etc), and these could reduce your income closer to zero. We’re not done here. Moreover, you could get a job that qualifies for Public Student Loan Forgiveness, enjoy your investments, and then pay for the 10 years. Be sure to convert all loans to federal before starting this tactic. I only put this as high risk, because the whole plan falls apart if Grad Plus loans get capped. Will they? Probably not, because those are the loans doctors and lawyers take out to go to their professional schools. It would take an act of congress to change the way the law stands now, but still, you should know that. This plan spans decades, so a lot can change. Also, having this many installment loans may lower your credit score over a multitude of years, but based on what everyone has found out here, it's not by much. For more information, go to this subreddit's search bar and type in "aggregate" and go look at BinaryAlgorithm's two posts on the subject.
Defusing the Student Tax Bomb: [low risk] So lucky for you, I talked to an actual lawyer and an actual IRS agent about this. This is completely legal and doable. Okay, so you were a good person and paid your IBR for 25-30 years. What now? Well, you’re about to be hit hard with a tax bomb. All of that money that is now forgiven counts as income on your taxes. This could mean a bill in the tens of thousands if you combined this with any of the other methods here—or just borrowed a lot to begin with. Luckily for us, there is something called insolvency. This means you are unable to pay your debts, and there is a really simple formula for whether or not you are insolvent. As long as you have more liabilities than assets at the time of student loan forgiveness, you are considered insolvent. In other words, right before you are about to be forgiven, like year 24 out of 25, you would take out a loan on something. All you would need to do is buy a house, buy a car, or buy something with a huge price tag. As long as your liabilities are way higher than your assets (like aim for 100K or something more), you are considered insolvent and you don’t have to pay any of the tax bomb. Boom. The IRS agent said this is fine. The lawyer said this is fine. I cannot believe this is fine. Where could you get the money to borrow for a house? Check Asset Creation method above. You could always sell the asset after the tax bomb is dealt with. For more information on defusing the student loan tax bomb: https://lawyerist.com/defusing-student-loan-interest-tax-bomb/
Getting Your Cosigner Off the Hook: So 90% of us have cosigners based on some statistic I read. These people are going to pissed at you, because they get harassed. If you have a lot of time to plan your strategy out, you can simply convert all of your private loans to federal ones. They are no longer responsible. The plan is above. Check out “Convert Private Loans to Federal.” Furthermore, if you are attempting to go the default route with private loans, you could potentially get your cosigner off the hook by refinancing your student loans without the cosigner. After you refinance, you could just default then. You would need good credit and meet certain requirements for this. Also, if you plan on defaulting, you might want to get your cosigner to transfer their assets to their spouse or someone trustworthy. Even though liens are rare, this could give you some peace of mind. As long as about 3-5 years go by, this is no longer considered a fraudulent transfer. Your state will have certain rules about this. If you are from Florida, apparently houses are untouchable there. You will need a lawyer to plan the asset transfer. At the same time, you may not be able to get your cosigner off the hook. Make peace with that. Student loans are brutal, so all you can really do is educate yourself and your cosigner and hope you come out on top.
Madlad Method: [high risk] Now, here comes my personal plan. This is what I’m doing, because I want to live a life on my terms and not really work for anyone my entire life. I’m also not a normal person, so this will probably appear crazy to some or most of you. So at this point, if you understand all of the methods before you, you are a powerful player in the student loan circus. You can do anything from fight the man to maliciously comply and bankrupt the system while becoming upper-middle class. I don’t really care for any of that. I want to go to a tropical paradise and make music for 20 years, so here is my interpretation of everything. I have some federal loans and private loans. I net about 25K a year through the Grad Plus loans, and I work about 4 hours a week in the online classroom. I take that federal loan money, and I sock away a few hundred every month to save up for my private loan settlement in about five years. Since I save 300 every month, I’ll have about 18K in 5 years when I go into default. I will settle ASAP. At the same time, I will continue to go to diploma mill universities, get master's degree after master’s degree, and move to a Latin American country where the cost of living is even lower. This way, my 25K a year puts me in the upper class of that country. I can live where I want and really do whatever I damn well please for as long as the Grad Plus loans are around. As an added bonus, I will already be starting a new life in another country where I can make connections and maybe even get married. I studied linguistics, so I know how to teach English. I can do that if I want a source of income anywhere. So there is my plan, and honestly, one day we might get someone in office who just wipes out all of this debt anyways. If that’s the case, I can just play the waiting game until all of this is over. Here are the rules on adverse credit history and Grad Plus loans: https://studentaid.ed.gov/sa/sites/default/files/plus-adverse-credit.pdf
Final Thoughts: Defaulting on student loans is not immoral or a sin. It is a business decision. Everyone else gets bailouts, why should student borrowers be any different? You’re going to have to ignore the people who tell you why they think you should be a good little slave and pay your loans. Those people are not your friends. Those people are not on your side. Some of the best advice I ever received in life was you have to do what’s best for you. Also, if you have anything you would like to add to this or would like to challenge, please let me know. I want this to be as accurate as possible. I will be looking at this perpetually to make sure there are no errors. Take care. Good luck. You can do this.
submitted by I_Ride_A_Nimbus to StudentLoanEscape [link] [comments]

A trans person's measured take on the trans sports issue

So first of all this post was inspired by GGExMachina's brief statement on the issue:
For example, it is objectively the case that biological men have a physical advantage over women. Yet if someone points this out and suggests that transgender people shouldn’t be allowed to fight in women’s UFC, or women’s soccer or weightlifting competitions or whatever, suddenly you’re some kind of evil monster. Rather than saying that of course trans people shouldn’t be bullied and that we could perhaps have a trans olympics (like the Paralympics and Special Olympics), we are expected to lie.
I've found that this position is incredibly popular among liberals/left-leaning people, especially here on reddit. It seems like, once or twice a month, like clockwork, a thread stating more or less the same thing on /unpopularopinion or /offmychest will get thousands of upvotes. And while I completely understand the thought process that leads otherwise left-leaning people to come to such conclusions, I feel like the issue has been, broadly speaking, dishonestly presented to the general public by a mixture of bad-faith actors and people who have succumbed to the moral panic. And, as I've seen, there are plenty of people in this subreddit and elsewhere who are itching to be as supportive as they possibly can to the trans community but find themselves becoming very disillusioned by this particular issue. By making this post I hope to present a more nuanced take on the issue, not only in regards to my personal beliefs on what kinds of policies are best to preserve fairness in women's sports but also in regards to shining a light on how this issue is often times dishonestly presented in an attempt to impede the progression of pro-trans sentiments in the cultural zeitgeist.

Sex & Gender

The word "transgender" is an umbrella term that refers to people whose gender identities differ from those typically associated with the sex they were assigned at birth. According to the 2015 U.S. Transgender Survey, the approximate composition of "the trans community" in the United States is 29% Transgender men (Female-to-Male), 33% Transgender women (Male-to-Female), and 35% non-binary. (The remaining 3% were survey respondents who self-identified as "crossdressers", who were still included in the survey on the grounds of being gender non-conforming)
While non-binary people, as a group, are probably deserving of their own separate post. the focus of this post will be on trans men and trans women. I will also be primarily focusing on transgender people who pursue medical transition with Hormone-Replacement-Therapy, as they are most relevant to the issue of sports. (Mind that while the majority of binary trans people fit into this camp, there is a sizable minority of trans people who do not feel the need to medically transition.)
What do trans people believe about Gender?
The views of transgender people in regards to Gender are actually pretty varied, although the most prominent positions that I've personally seen are best summed up into two different camps:
  1. The "Trans-Medical" camp
Transgender people who fall into this camp usually consider Gender Dysphoria to be the defining factor of what makes somebody trans. The best way I can describe this camp is that they sort of view being transgender akin to being intersex. Only whereas an intersex person would be born with a disorder that affects the body, a trans person is born with a disorder that affects the brain. Trans people in this camp often times put an emphasis on a clinical course for treatment. For example, a person goes to a psychologist, gets diagnosed with gender dysphoria, starts hormone replacement therapy, pursues surgery, then emerges from this process of either cured of the gender dysphoria or, at the very least, treated to the fullest extent of medical intervention. This position is more or less the original position held by trans activists, back in the day when the word "transsexual" was used instead of "transgender". Though many younger trans people, notably YouTuber Blaire White, also hold this position. Under this position, sex and gender are still quite intertwined, but a trans man can still be considered a man, and a trans woman a woman, under the belief that sex/gender doesn't just refer to chromosomal sex and reproductive organs, but also to neurobiology, genitalia, and secondary sex characteristics. So someone who is transgender, according to this view, is born with the physical characteristics of one sex/gender but the neurobiology of another, and will change their physical characteristics, to the fullest extent medically possible, to match the neurobiology and therefore cure the individual of gender dysphoria.
Critics of this position argue that this mentality is problematic due to being inherently exclusive to transgender people who do not pursue medical transition, whom are often times deemed as "transtrenders" by people within this camp. Many people find it additionally problematic because it is also inherently exclusive to poorer trans people, particularly those in developing nations, who may not have access to trans-related medical care. Note that there are plenty of trans people who *do* have access to medical transition, but nevertheless feel as if the trans community shouldn't gatekeep people who cannot afford or do not desire medical transition, thus believing in the latter camp.
  1. The "Gender Identity" camp
I feel like this camp is the one most popularly criticized by people on the right, but is also probably the most mainstream. It is the viewpoint held by many more left-wing trans people, (Note that in the aforementioned 2015 survey, only 1% of trans respondents voted Republican, so trans people are largely a pretty left-wing group, therefore it makes sense that this position would be the most mainstream) but also notably held by American Psychological Association, the American Psychiatric Association, GLAAD, and other mainstream health organizations and activist groups.
While people in this camp still acknowledge that medical transition to treat gender dysphoria can still be a very important aspect of the transgender experience, it's believed that the *defining* experience is simply having a gender identity different from the one they were assigned at birth. "Gender identity" simply being the internal, personal sense of being a man, a woman, or outside the gender binary.
Many people in this camp, though, still often maintain that gender identity is (at least partially) neurobiological, but differ from the first camp in regards to acknowledging that the issue is less black & white than an individual simply having a "male brain" or a "female brain", but rather that the neurological characteristics associated with gender exist on more of a spectrum, thus leaving the door open to gender non-conforming people who do not identify as trans, as well as to non-binary people. This is where the "gender is a spectrum" phrase comes from.
"52 genders" is a popular right-wing meme that makes fun of this viewpoint, however it is important to note that many trans and non-binary people disagree with the idea of quantifying gender identity to such an absurd amount of individual genders, rather more simply maintaining that there are men, women, and a small portion of people in-between, with a few words such as "agender" or "genderqueer" being used to describe specific identities/presentations within this category.
It's also noteworthy that not all people in this camp believe that neurobiology is the be-all-end-all of gender identity, as many believe that the performativity of gender also plays an integral role in one's identity. (That gender identity is a mixture of neurobiology and performativity is a position held by YouTuber Contrapoints)
Trans people and biological sex
So while the aforementioned "Gender Identity" viewpoint has become quite popularized among liberals and leftists, I have noticed a certain rhetorical mentality/assumption become prevalent alongside it, especially among cisgender people who consider themselves trans-allies:
"Sex and Gender are different. A trans woman is a woman who is biologically male. A trans man is a man who is biologically female"
When "Sex" is defined by someone's chromosomes, or the sex organs they were born with, this is correct. However, there is a pretty good reason why the trans community tends to prefer terms like "Assigned Male at Birth" rather than "Biologically Male". This is done not only for the inclusion of people who are both intersex and transgender (For example, someone can be born intersex but assigned male based on the existence of a penis or micropenis), but also due to the aforementioned viewpoint on divergent neurobiology being the cause for gender dysphoria. Those reasons are why the word "Assigned" is used. But the reason why it's "Assigned Male/Female At Birth" instead of just "Assigned Male/Female" is because among the trans community there exists an understanding of the mutability of sexually dimorphic biology that the general population is often ignorant to. For example, often times people (especially older folks) don't even know of the existence of Hormone Replacement Therapy, and simply assume that trans people get a single "sex change operation" that, (for a trans woman) would just entail the removal of the penis and getting breast implants. Therefore they imagine the process to be "medically sculpting a male to look female" instead of a more natural biological process of switching the endocrine system form male to female or vice versa and letting the body change over the course of multiple years. It doesn't help that, for a lot of older trans people (namely Caitlyn Jenner, who is probably the most high profile trans person sadly), the body can be a lot more resistant to change even with hormones so they *do* need to rely on plastic surgery a lot more to get obvious results)
So what sexually dimorphic bodily characteristics can one expect to change from Hormone Replacement Therapy?
(Note that there is a surprising lack of studies done on some of the more intricate changes that HRT can, so I've put a "*" next to the changes that are anecdotal, but still commonly and universally observed enough among trans people [including myself for the MTF stuff] to consider factual. I've also put a "✝" next to the changes that only occur when people transition before or during puberty)
Male to Female:
Female to Male:
For the sake of visual representation, here are a couple of images from /transtimelines to demonstrate these changes in adult transitioners (I've specifically chosen athletic individuals to best demonstrate muscular changes)
https://preview.redd.it/ntw333p9sbty.jpg?width=640&crop=smart&auto=webp&s=5fe779757dfc4a5dc56566ff648d337c59fbe5cb
https://www.reddit.com/transtimelines/comments/dpca0f/3_years_on_vitamin_t/
Additionally, here's a picture of celebrity Kim Petras who transitioned before male puberty, in case you were wondering what "female pubescent skeletal development" looks like in a trans woman:
https://cdn2.thelineofbestfit.com/images/made/images/remote/https_cdn2.thelineofbestfit.com/portraits/kim_petras_burakcingi01_1107_1661_90.jpg

How does this relate to sports?

Often times, when the whole "transgender people in sports" discussion arises, a logical error is made when *all* transgender people are assumed to be "biologically" their birth sex. For example, when talking about trans women participating in female sports, these instances will be referred to as cases of "Biological males competing against females".
As mentioned before, calling a trans woman "biologically male" strictly in regards to chromosomes or sex organs at birth would be correct. However, not only can it be considered derogatory (the word "male" is colloquially a shorthand for "man", after all), but there are many instances where calling a post-HRT transgender person "biologically [sex assigned at birth]" is downright misleading.
For example, hospitals have, given transgender patients improper or erroneous medical care by assuming treatment based on birth sex where treatment based on their current endocrinological sex would have been more adequate.
Acute Clinical Care of Transgender Patients: A Review
Conclusions and relevance: Clinicians should learn how to engage with transgender patients, appreciate that unique anatomy or the use of gender-affirming hormones may affect the prevalence of certain disease (eg, cardiovascular disease, venous thromboembolism, and osteoporosis), and be prepared to manage specific issues, including those related to hormone therapy. Health care facilities should work toward providing inclusive systems of care that correctly identify and integrate information about transgender patients into the electronic health record, account for the unique needs of these patients within the facility, and through education and policy create a welcoming environment for their care.
Some hosptials have taken to labeling the biological sex of transgender patients as "MTF" (for post-HRT trans women) and "FTM" (for post-HRT trans men), which is a much more medically useful identifier compared to their sex assigned at birth.
In regards to the sports discussion, I've seen *multiple threads* where redditors have backed up their opinions on the subject of trans people in sports with studies demonstrating that cis men are, on average, more athletically capable than cis women. Which I personally find to be a pathetic misunderstanding of the entire issue.
Because we're not supposed to be comparing the athletic capabilities of natal males to natal females, here. We're supposed to comparing the athletic capabilities of *post-HRT male-to-females* to natal females. And, if we're going to really have a fact-based discussion on the matter, we need to have separate categories for pre-pubescent and post-pubescent transitioners. Since, as mentioned earlier, the former will likely have different skeletal characteristics compared to the latter.
The current International Olympic Committee (IOC) model for trans participation, and criticisms of said model
(I quoted the specific guidelines from the International Cycling Union, but similar guidelines exist for all Olympic sports)
Elite Competition
At elite competition levels, members may have the opportunity to represent the United States and participate in international competition. They may therefore be subject to the policies and regulations of the International Cycling Union (UCI) and International Olympic Committee (IOC). USA Cycling therefore follows the IOC guidelines on transgender athletes at these elite competition levels. For purposes of this policy, international competition means competition sanctioned by the UCI or competition taking place outside the United States in which USA Cycling’s competition rules do not apply.
The IOC revised its guidelines on transgender athlete participation in 2015, to focus on hormone levels and medical monitoring. The main points of the guidelines are:
Those who transition from female to male are eligible to compete in the male category without restriction. It is the responsibility of athletes to be aware of current WADA/USADA policies and file for appropriate therapeutic use exemptions.
Those who transition from male to female are eligible to compete in the female category under the following conditions:
The athlete has declared that her gender identity is female. The declaration cannot be changed, for sporting purposes, for a minimum of four years.
The athlete must demonstrate that her total testosterone level in serum has been below 10 nmol/L for at least 12 months prior to her first competition (with the requirement for any longer period to be based on a confidential case-by-case evaluation, considering whether or not 12 months is a sufficient length of time to minimize any advantage in women’s competition).
The athlete's total testosterone level in serum must remain below 10 nmol/L throughout the period of desired eligibility to compete in the female category.
Compliance with these conditions may be monitored by random or for-cause testing. In the event of non-compliance, the athlete’s eligibility for female competition will be suspended for 12 months.
Valid criticisms of the IOC model are usually based on the fact that, even though hormone replacement therapy provokes changes to muscle mass, it does *not* shrink the size of someone's skeleton or cardiovascular system. Therefore an adult-transitioned trans woman could, even after losing all levels of male-typical muscle mass, still have an advantage in certain sports if she had an excessively large skeletal frame, and was participating in a sport where such a thing would be advantageous.
Additionally, the guidelines only require that athletes be able to demonstrate having had female hormone levels for 12-24 months, which isn't necessarily long enough to completely lose musculature gained from training on testosterone (anecdotally it can take 2-4 years to completely lose male-typical muscle mass) So the IOC guidelines don't have any safeguard against, for example, a trans woman training with testosterone as the dominant hormone in her body, and then taking hormones for the bare minimum time period and still having some of the advantage left.
Note that, while lower level sports have had (to the glee of right-wing publications sensationalizing the issue) instances of this exact thing happening, in the 16 years since these IOC guidelines were established, not a single transgender individual has won an Olympic medal
Also note that none of the above criticisms of the IOC policy would apply in regards to the participation of pre-pubescent-transitioned trans women. After all, male-pubescent bone structure and cardiovascular size, and male-typical muscle levels, can't possibly exist if you never went through male puberty to begin with.
What could better guidelines entail, to best preserve fairness in female sports while avoiding succumbing to anti-trans moral panic?
In my personal opinion, sports leagues should pick one of the three above options depending on what best fits the nature of the sport and the eliteness of the competition. For example, extremely competitive contact sports might be better off going with the first option, but an aerobic sport such as marathon running would probably be fine with the third option.

How this issue has been misrepresented by The Right

I'll use Joe Rogan as an example of this last thing:
She calls herself a woman but... I tend to disagree. And, uh, she, um... she used to be a man but now she has had, she's a transgender which is (the) official term that means you've gone through it, right? And she wants to be able to fight women in MMA. I say no f***ing way.
I say if you had a dick at one point in time, you also have all the bone structure that comes with having a dick. You have bigger hands, you have bigger shoulder joints. You're a f***ing man. That's a man, OK? You can't have... that's... I don't care if you don't have a dick any more...
If you want to be a woman in the bedroom and you know you want to play house and all of that other s*** and you feel like you have, your body is really a woman's body trapped inside a man's frame and so you got a operation, that's all good in the hood. But you can't fight chicks. Get the f*** out of here. You're out of your mind. You need to fight men, you know? Period. You need to fight men your size because you're a man. You're a man without a dick.
I'm not trying to discriminate against women in any way, shape, or form and I'm a big supporter of women's fighting. I loved watching that Ronda Rousey/Liz Carmouche fight. But those are actual women. Those are actual women. And as strong as Ronda Rousey looks, she's still looks to me like a pretty girl. She's a beautiful girl who happens to be strong. She's a girl! [Fallon Fox] is not a girl, OK? This is a [transgender] woman. It's a totally different specification.
Calling a trans woman a "man", and equating transitioning to merely removal of the dick, and equating trans women's experiences as women as "playing house" and "being a woman in the bedroom". These things are obviously pretty transphobic, and if Rogan had said these things about just any random trans woman his statements would have likely been more widely seen in that light. But when it's someone having an unfair advantage in sports, and the audience is supposed to be angry with you, it's much more socially acceptable thing to say such things. But the problem is, when you say these kinds of things about one trans woman, you're essentially saying those derogatory things about all trans women by extension. It's the equivalent of using an article about a black home invader who murdered a family as an excuse to use a racial slur.
Now, I'm not saying that Rogan necessarily did this on purpose, in fact I'm more inclined to believe that it was done moreso due to ignorance rather than having an actual ideological agenda. But since then, many right wing ideologues who do have an ideological agenda have used this issue as an excuse to voice their opinions on trans people while appearing to be less bigoted. Ie. "I'm not trying to be a bigot or anything and I accept people's rights to live their lives as they see fit, but we NEED to keep men out of women's sports", as a sly way to call trans women "men".
Additionally, doing this allows them to slip in untrue statements about the biology of trans women. I mean, first of all in regards to the statement "You have bigger hands, you have bigger shoulder joints", obviously even in regards to post-pubescent transitioners, not every trans woman is going to have bigger hands and shoulder joints than every cis woman (My hands are actually smaller than my aunt's!). It's just that people who go through male puberty on average tend to have bigger hands and shoulder joints compared to people who go through female puberty. But over-exaggerating the breadth of sexual dimorphism, as if males and females are entirely different species to each-other, helps to paint the idea of transitioning in a more nonsensical light.
I hope this thread has presented this issue in a better light for anyone reading it. Let me know if you have any thoughts/criticisms of my stances or the ways I went about this issue.
submitted by Rosa_Rojacr to samharris [link] [comments]

Unable to run custom scripts via dmenu when it is started with i3's mod+d key

I have encountered strange behaviour regarding dmenu_run and dmenu_recency. When I run dmenu_run or dmenu_recency from terminal and then execute simple script like echo "test" value test is printed in the terminal. However when I run dmenu_recency or dmenu_run with i3 keybinding like:
bindsym $mod+d exec --no-startup-id dmenu_recency
and then execute same simple script, then nothing happens. Dmenu lunches for other installed programs works well, it just doesen't work for execuution of my custom scripts.
What am I missing here? I suspect I have to add something else to my scripts but i dont know what. For now it is jsut plain this:
echo "test"

EDIT: Ok maybe script: echo "test" is not the best example since it is true that there is no opened terminal to write to.
But same thing happens if I try to execute script that looks like this:
code ~/.i3/config
This jsut opens the i33 config file with visual studio code. Again this works when I execute it via dmenu_run that was called from existing termina but it doesen't work when executed via dmenu_run that was called via i3 keybinding mod+d
EDIT 2:
.i3/config
# i3 config file (v4) # Please see http://i3wm.org/docs/userguide.html for a complete reference! # Set mod key (Mod1=, Mod4=) set $mod Mod4 # My testing shortcuts bindsym $mod+c exec code bindsym $mod+Shift+x exec terminal; exec terminal bindsym $mod+F4 exec /home/erik/Programs/pycharm-community-2020.2.1/bin/pycharm.sh bindsym $mod+Shift+F2 exec /home/erik/CustomScripts/google_calendar # CONFIGURABLE PRINTSCREENS OPTIONS # take a screenshot of a screen region and copy it to a clipboard #bindsym --release Shift+Print exec "ScreenCapture.sh -s /home/erik/Pictures/Screenshots/" # take a screenshot of a whole window and copy it to a clipboard #bindsym --release Print exec "ScreenCapture.sh /home/erik/Pictures/Screenshots/" # set default desktop layout (default is tiling) # workspace_layout tabbed  # Configure border style  default_border pixel 2 default_floating_border normal # Hide borders hide_edge_borders none # change borders bindsym $mod+u border none bindsym $mod+y border pixel 1 bindsym $mod+n border normal # You can also use any non-zero value if you'd like to have a border (this is to prevent issues with gaps) # for_window [class=".*"] border pixel 1 # Font for window titles. Will also be used by the bar unless a different font # is used in the bar {} block below. font xft:URWGothic-Book 11 # Use Mouse+$mod to drag floating windows floating_modifier $mod # start a terminal bindsym $mod+Return exec terminal # kill focused window bindsym $mod+Shift+q kill # start program launcher # bindsym $mod+d exec --no-startup-id dmenu_recency bindsym $mod+d exec --no-startup-id home/erik/CustomScripts/redit_solution dmenu_recency # launch categorized menu bindsym $mod+z exec --no-startup-id morc_menu ################################################################################################ ## sound-section - DO NOT EDIT if you wish to automatically upgrade Alsa -> Pulseaudio later! ## ################################################################################################ #exec --no-startup-id volumeicon #bindsym $mod+Ctrl+m exec terminal -e 'alsamixer' exec --no-startup-id start-pulseaudio-x11 exec --no-startup-id pa-applet bindsym $mod+Ctrl+m exec pavucontrol ################################################################################################ # Screen brightness controls # bindsym XF86MonBrightnessUp exec "xbacklight -inc 10; notify-send 'brightness up'" # bindsym XF86MonBrightnessDown exec "xbacklight -dec 10; notify-send 'brightness down'" # Start Applications bindsym $mod+Ctrl+b exec terminal -e 'bmenu' bindsym $mod+F2 exec chromium bindsym $mod+F3 exec pcmanfm # bindsym $mod+F3 exec ranger bindsym $mod+Shift+F3 exec pcmanfm_pkexec bindsym $mod+F5 exec terminal -e 'mocp' bindsym $mod+t exec --no-startup-id pkill compton bindsym $mod+Ctrl+t exec --no-startup-id compton -b bindsym $mod+Shift+d --release exec "killall dunst; exec notify-send 'restart dunst'" bindsym Print exec --no-startup-id i3-scrot bindsym $mod+Print --release exec --no-startup-id i3-scrot -w bindsym $mod+Shift+Print --release exec --no-startup-id i3-scrot -s bindsym $mod+Shift+h exec xdg-open /usshare/doc/manjaro/i3_help.pdf bindsym $mod+Ctrl+x --release exec --no-startup-id xkill focus_follows_mouse no # change focus bindsym $mod+j focus left bindsym $mod+k focus down bindsym $mod+l focus up bindsym $mod+semicolon focus right # alternatively, you can use the cursor keys: bindsym $mod+Left focus left bindsym $mod+Down focus down bindsym $mod+Up focus up bindsym $mod+Right focus right # move focused window bindsym $mod+Shift+j move left bindsym $mod+Shift+k move down bindsym $mod+Shift+l move up bindsym $mod+Shift+semicolon move right # alternatively, you can use the cursor keys: bindsym $mod+Shift+Left move left bindsym $mod+Shift+Down move down bindsym $mod+Shift+Up move up bindsym $mod+Shift+Right move right # workspace back and forth (with/without active container) workspace_auto_back_and_forth yes bindsym $mod+b workspace back_and_forth bindsym $mod+Shift+b move container to workspace back_and_forth; workspace back_and_forth # split orientation bindsym $mod+h split h;exec notify-send 'tile horizontally' bindsym $mod+v split v;exec notify-send 'tile vertically' bindsym $mod+q split toggle # toggle fullscreen mode for the focused container bindsym $mod+f fullscreen toggle # change container layout (stacked, tabbed, toggle split) bindsym $mod+s layout stacking bindsym $mod+w layout tabbed bindsym $mod+e layout toggle split # toggle tiling / floating bindsym $mod+Shift+space floating toggle # change focus between tiling / floating windows bindsym $mod+space focus mode_toggle # toggle sticky bindsym $mod+Shift+s sticky toggle # focus the parent container bindsym $mod+a focus parent # move the currently focused window to the scratchpad bindsym $mod+Shift+minus move scratchpad # Show the next scratchpad window or hide the focused scratchpad window. # If there are multiple scratchpad windows, this command cycles through them. bindsym $mod+minus scratchpad show #navigate workspaces next / previous bindsym $mod+Ctrl+Right workspace next bindsym $mod+Ctrl+Left workspace prev # Workspace names # to display names or symbols instead of plain workspace numbers you can use # something like: set $ws1 1:mail # set $ws2 2: set $ws1 1 set $ws2 2 set $ws3 3 set $ws4 4 set $ws5 5 set $ws6 6 set $ws7 7 set $ws8 8 # switch to workspace bindsym $mod+1 workspace $ws1 bindsym $mod+2 workspace $ws2 bindsym $mod+3 workspace $ws3 bindsym $mod+4 workspace $ws4 bindsym $mod+5 workspace $ws5 bindsym $mod+6 workspace $ws6 bindsym $mod+7 workspace $ws7 bindsym $mod+8 workspace $ws8 # Move focused container to workspace bindsym $mod+Ctrl+1 move container to workspace $ws1 bindsym $mod+Ctrl+2 move container to workspace $ws2 bindsym $mod+Ctrl+3 move container to workspace $ws3 bindsym $mod+Ctrl+4 move container to workspace $ws4 bindsym $mod+Ctrl+5 move container to workspace $ws5 bindsym $mod+Ctrl+6 move container to workspace $ws6 bindsym $mod+Ctrl+7 move container to workspace $ws7 bindsym $mod+Ctrl+8 move container to workspace $ws8 # Move to workspace with focused container bindsym $mod+Shift+1 move container to workspace $ws1; workspace $ws1 bindsym $mod+Shift+2 move container to workspace $ws2; workspace $ws2 bindsym $mod+Shift+3 move container to workspace $ws3; workspace $ws3 bindsym $mod+Shift+4 move container to workspace $ws4; workspace $ws4 bindsym $mod+Shift+5 move container to workspace $ws5; workspace $ws5 bindsym $mod+Shift+6 move container to workspace $ws6; workspace $ws6 bindsym $mod+Shift+7 move container to workspace $ws7; workspace $ws7 bindsym $mod+Shift+8 move container to workspace $ws8; workspace $ws8 # Open applications on specific workspaces # assign [class="Thunderbird"] $ws1 # assign [class="Pale moon"] $ws2 # assign [class="Pcmanfm"] $ws3 # assign [class="Skype"] $ws5 # Open specific applications in floating mode for_window [title="alsamixer"] floating enable border pixel 1 for_window [class="calamares"] floating enable border normal for_window [class="Clipgrab"] floating enable for_window [title="File Transfer*"] floating enable for_window [class="fpakman"] floating enable for_window [class="Galculator"] floating enable border pixel 1 for_window [class="GParted"] floating enable border normal for_window [title="i3_help"] floating enable sticky enable border normal for_window [class="Lightdm-settings"] floating enable for_window [class="Lxappearance"] floating enable sticky enable border normal for_window [class="Manjaro-hello"] floating enable for_window [class="Manjaro Settings Manager"] floating enable border normal for_window [title="MuseScore: Play Panel"] floating enable for_window [class="Nitrogen"] floating enable sticky enable border normal for_window [class="Oblogout"] fullscreen enable for_window [class="octopi"] floating enable for_window [title="About Pale Moon"] floating enable for_window [class="Pamac-manager"] floating enable for_window [class="Pavucontrol"] floating enable for_window [class="qt5ct"] floating enable sticky enable border normal for_window [class="Qtconfig-qt4"] floating enable sticky enable border normal for_window [class="Simple-scan"] floating enable border normal for_window [class="(?i)System-config-printer.py"] floating enable border normal for_window [class="Skype"] floating enable border normal for_window [class="Timeset-gui"] floating enable border normal for_window [class="(?i)virtualbox"] floating enable border normal for_window [class="Xfburn"] floating enable # switch to workspace with urgent window automatically for_window [urgent=latest] focus # reload the configuration file bindsym $mod+Shift+c reload # restart i3 inplace (preserves your layout/session, can be used to upgrade i3) bindsym $mod+Shift+r restart # exit i3 (logs you out of your X session) bindsym $mod+Shift+e exec "i3-nagbar -t warning -m 'You pressed the exit shortcut. Do you really want to exit i3? This will end your X session.' -b 'Yes, exit i3' 'i3-msg exit'" # Set shut down, restart and locking features bindsym $mod+0 mode "$mode_system" set $mode_system (l)ock, (e)xit, switch_(u)ser, (s)uspend, (h)ibernate, (r)eboot, (Shift+s)hutdown mode "$mode_system" { bindsym l exec --no-startup-id i3exit lock, mode "default" bindsym s exec --no-startup-id i3exit suspend, mode "default" bindsym u exec --no-startup-id i3exit switch_user, mode "default" bindsym e exec --no-startup-id i3exit logout, mode "default" bindsym h exec --no-startup-id i3exit hibernate, mode "default" bindsym r exec --no-startup-id i3exit reboot, mode "default" bindsym Shift+s exec --no-startup-id i3exit shutdown, mode "default" # exit system mode: "Enter" or "Escape" bindsym Return mode "default" bindsym Escape mode "default" } # Resize window (you can also use the mouse for that) bindsym $mod+r mode "resize" mode "resize" { # These bindings trigger as soon as you enter the resize mode # Pressing left will shrink the window’s width. # Pressing right will grow the window’s width. # Pressing up will shrink the window’s height. # Pressing down will grow the window’s height. bindsym j resize shrink width 5 px or 5 ppt bindsym k resize grow height 5 px or 5 ppt bindsym l resize shrink height 5 px or 5 ppt bindsym semicolon resize grow width 5 px or 5 ppt # same bindings, but for the arrow keys bindsym Left resize shrink width 5 px or 5 ppt bindsym Down resize grow height 5 px or 5 ppt bindsym Up resize shrink height 5 px or 5 ppt bindsym Right resize grow width 5 px or 5 ppt # exit resize mode: Enter or Escape bindsym Return mode "default" bindsym Escape mode "default" } # Lock screen bindsym $mod+9 exec --no-startup-id blurlock # Autostart applications exec --no-startup-id /uslib/polkit-gnome/polkit-gnome-authentication-agent-1 exec --no-startup-id nitrogen --restore; sleep 1; compton -b # exec --no-startup-id manjaro-hello exec --no-startup-id nm-applet exec --no-startup-id xfce4-power-manager exec --no-startup-id pamac-tray exec --no-startup-id clipit exec --no-startup-id picom # exec --no-startup-id blueman-applet # exec_always --no-startup-id sbxkb exec --no-startup-id start_conky_maia # exec --no-startup-id start_conky_green exec --no-startup-id xautolock -time 10 -locker blurlock exec_always --no-startup-id ff-theme-util exec_always --no-startup-id fix_xcursor # Color palette used for the terminal ( ~/.Xresources file ) # Colors are gathered based on the documentation: # https://i3wm.org/docs/userguide.html#xresources # Change the variable name at the place you want to match the color # of your terminal like this: # [example] # If you want your bar to have the same background color as your # terminal background change the line 362 from: # background #14191D # to: # background $term_background # Same logic applied to everything else. set_from_resource $term_background background set_from_resource $term_foreground foreground set_from_resource $term_color0 color0 set_from_resource $term_color1 color1 set_from_resource $term_color2 color2 set_from_resource $term_color3 color3 set_from_resource $term_color4 color4 set_from_resource $term_color5 color5 set_from_resource $term_color6 color6 set_from_resource $term_color7 color7 set_from_resource $term_color8 color8 set_from_resource $term_color9 color9 set_from_resource $term_color10 color10 set_from_resource $term_color11 color11 set_from_resource $term_color12 color12 set_from_resource $term_color13 color13 set_from_resource $term_color14 color14 set_from_resource $term_color15 color15 # Start i3bar to display a workspace bar (plus the system information i3status if available) bar { i3bar_command i3bar status_command i3status position bottom ## please set your primary output first. Example: 'xrandr --output eDP1 --primary' # tray_output primary # tray_output eDP1 bindsym button4 nop bindsym button5 nop # font xft:URWGothic-Book 11 strip_workspace_numbers yes colors { background #222D31 statusline #F9FAF9 separator #ff9a1f # border backgr. text focused_workspace #ff9a1f #ff9a1f #292F34 active_workspace #595B5B #353836 #FDF6E3 inactive_workspace #595B5B #222D31 #EEE8D5 binding_mode #16a085 #2C2C2C #F9FAF9 urgent_workspace #16a085 #FDF6E3 #E5201D } } # hide/unhide i3status bar bindsym $mod+m bar mode toggle # Theme colors # class border backgr. text indic. child_border client.focused #ff9a1f #ff9a1f #000000 #ff9a1f client.focused_inactive #2F3D44 #2F3D44 #1ABC9C #454948 client.unfocused #2F3D44 #2F3D44 #1ABC9C #454948 client.urgent #CB4B16 #FDF6E3 #1ABC9C #268BD2 client.placeholder #000000 #0c0c0c #ffffff #000000 client.background #2B2C2B ############################# ### settings for i3-gaps: ### ############################# # Set inneouter gaps gaps inner 0 gaps outer 0 # Additionally, you can issue commands with the following syntax. This is useful to bind keys to changing the gap size. # gaps inner|outer current|all set|plus|minus  # gaps inner all set 10 # gaps outer all plus 5 # Smart gaps (gaps used if only more than one container on the workspace) smart_gaps on # Smart borders (draw borders around container only if it is not the only container on this workspace) # on|no_gaps (on=always activate and no_gaps=only activate if the gap size to the edge of the screen is 0) smart_borders on # Press $mod+Shift+g to enter the gap mode. Choose o or i for modifying outeinner gaps. Press one of + / - (in-/decrement for current workspace) or 0 (remove gaps for current workspace). If you also press Shift with these keys, the change will be global for all workspaces. set $mode_gaps Gaps: (o) outer, (i) inner set $mode_gaps_outer Outer Gaps: +|-|0 (local), Shift + +|-|0 (global) set $mode_gaps_inner Inner Gaps: +|-|0 (local), Shift + +|-|0 (global) bindsym $mod+Shift+g mode "$mode_gaps" mode "$mode_gaps" { bindsym o mode "$mode_gaps_outer" bindsym i mode "$mode_gaps_inner" bindsym Return mode "default" bindsym Escape mode "default" } mode "$mode_gaps_inner" { bindsym plus gaps inner current plus 5 bindsym minus gaps inner current minus 5 bindsym 0 gaps inner current set 0 bindsym Shift+plus gaps inner all plus 5 bindsym Shift+minus gaps inner all minus 5 bindsym Shift+0 gaps inner all set 0 bindsym Return mode "default" bindsym Escape mode "default" } mode "$mode_gaps_outer" { bindsym plus gaps outer current plus 5 bindsym minus gaps outer current minus 5 bindsym 0 gaps outer current set 0 bindsym Shift+plus gaps outer all plus 5 bindsym Shift+minus gaps outer all minus 5 bindsym Shift+0 gaps outer all set 0 bindsym Return mode "default" bindsym Escape mode "default" } 
.bashrc
# # ~/.bashrc # [[ $- != *i* ]] && return colors() { local fgc bgc vals seq0 printf "Color escapes are %s\n" '\e[${value};...;${value}m' printf "Values 30..37 are \e[33mforeground colors\e[m\n" printf "Values 40..47 are \e[43mbackground colors\e[m\n" printf "Value 1 gives a \e[1mbold-faced look\e[m\n\n" # foreground colors for fgc in {30..37}; do # background colors for bgc in {40..47}; do fgc=${fgc#37} # white bgc=${bgc#40} # black vals="${fgc:+$fgc;}${bgc}" vals=${vals%%;} seq0="${vals:+\e[${vals}m}" printf " %-9s" "${seq0:-(default)}" printf " ${seq0}TEXT\e[m" printf " \e[${vals:+${vals+$vals;}}1mBOLD\e[m" done echo; echo done } [ -r /usshare/bash-completion/bash_completion ] && . /usshare/bash-completion/bash_completion # Change the window title of X terminals case ${TERM} in xterm*|rxvt*|Eterm*|aterm|kterm|gnome*|interix|konsole*) PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\007"' ;; screen*) PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/\~}\033\\"' ;; esac use_color=true # Set colorful PS1 only on colorful terminals. # dircolors --print-database uses its own built-in database # instead of using /etc/DIR_COLORS. Try to use the external file # first to take advantage of user additions. Use internal bash # globbing instead of external grep binary. safe_term=${TERM//[^[:alnum:]]/?} # sanitize TERM match_lhs="" [[ -f ~/.dir_colors ]] && match_lhs="${match_lhs}$(<~/.dir_colors)" [[ -f /etc/DIR_COLORS ]] && match_lhs="${match_lhs}$(/dev/null \ && match_lhs=$(dircolors --print-database) [[ $'\n'${match_lhs} == *$'\n'"TERM "${safe_term}* ]] && use_color=true if ${use_color} ; then # Enable colors for ls, etc. Prefer ~/.dir_colors #64489 if type -P dircolors >/dev/null ; then if [[ -f ~/.dir_colors ]] ; then eval $(dircolors -b ~/.dir_colors) elif [[ -f /etc/DIR_COLORS ]] ; then eval $(dircolors -b /etc/DIR_COLORS) fi fi if [[ ${EUID} == 0 ]] ; then PS1='\[\033[01;31m\][\h\[\033[01;36m\] \W\[\033[01;31m\]]\$\[\033[00m\] ' else PS1='\[\033[01;32m\][\[email protected]\h\[\033[01;37m\] \W\[\033[01;32m\]]\$\[\033[00m\] ' fi alias ls='ls --color=auto' alias grep='grep --colour=auto' alias egrep='egrep --colour=auto' alias fgrep='fgrep --colour=auto' else if [[ ${EUID} == 0 ]] ; then # show [email protected] when we don't have colors PS1='\[email protected]\h \W \$ ' else PS1='\[email protected]\h \w \$ ' fi fi unset use_color safe_term match_lhs sh alias cp="cp -i" # confirm before overwriting something alias df='df -h' # human-readable sizes alias free='free -m' # show sizes in MB alias np='nano -w PKGBUILD' alias more=less xhost +local:root > /dev/null 2>&1 complete -cf sudo # Bash won't get SIGWINCH if another process is in the foreground. # Enable checkwinsize so that bash will check the terminal size when # it regains control. #65623 # http://cnswww.cns.cwru.edu/~chet/bash/FAQ (E11) shopt -s checkwinsize shopt -s expand_aliases # export QT_SELECT=4 # Enable history appending instead of overwriting. #139609 shopt -s histappend # # # ex - archive extractor # # usage: ex  ex () { if [ -f $1 ] ; then case $1 in *.tar.bz2) tar xjf $1 ;; *.tar.gz) tar xzf $1 ;; *.bz2) bunzip2 $1 ;; *.rar) unrar x $1 ;; *.gz) gunzip $1 ;; *.tar) tar xf $1 ;; *.tbz2) tar xjf $1 ;; *.tgz) tar xzf $1 ;; *.zip) unzip $1 ;; *.Z) uncompress $1;; *.7z) 7z x $1 ;; *) echo "'$1' cannot be extracted via ex()" ;; esac else echo "'$1' is not a valid file" fi } #Custom programs export PATH="/home/uusePrograms/pycharm-community-2020.2.1/bin:$PATH" # Custom scritps export PATH="/home/useCustomScripts:$PATH" 

submitted by Amuoeba8 to i3wm [link] [comments]

Best TTRPGs for Play by Post Games.

Hey all, I'm trying to put together a list of good potential systems for play by post games. I'm going to be running one for some of my friends, so I'm trying to think of what would need to change and what would not work.
  1. It likely can't have heavy emphasis on turns, especially combat. Tactical combat wouldn't work pretty much at all. This rules out some of my favorite systems like Lancer and D&D 4e. Instead, combat should flow naturally as part of the game's narrative, which is something Powered by the Apocalypse does well.
  2. Character creation and customization should be flexible but simple to track. This likely rules out another of my favorite systems, Mutants and Masterminds. It's mostly narrative driven for actual play but the character creation process is quite literally college level algebra at times and may be hard to keep track of everything.
  3. If there are dice rolls, they should be simple without too many multipliers and should allow for something more flexible than a success or failure binary. Again Powered by the Apocalypse handles this well, though so does Fate and a few other narrative driven systems. I would say dice pool systems like the Storyteller system and variations of it would not suit things well here. Preferably, they should have rolls that rely mostly on one die type, like d6 systems.
Those are the main three factors I'm trying to keep in mind. Meanwhile, here are the systems I'm considering.
Powered by the Apocalypse: This seems like the most well suited for play by post. The actions are more narrative driven, with the rolls being based off what makes most sense for the story rather than being set by gameplay. Lots of options exist here too, Dungeon World, Masks, Urban Shadows, and Monsterhearts tend to be the most well known, but it allows for plenty of variation. Many of the games have SRDs, free pdfs, or pay what you want pdfs, making it easy to get a hold of the content.
Forged in the Dark: This is essentially a variation of Powered by the Apocalypse with some deeper mechanics to it. There are a few major differences though, the biggest and most interesting in my opinion being the heavier focus on group play, with essentially leveling up the team and receiving group actions alongside the individual player ones. Blades in the Dark and Beamsaber are probably the most well known right now. SRDs, free pdfs, and pay what you want pdfs exist for some of the more popular versions of this system.
Ryutama: This is basically if a Ghibli film was a ttrpg. There's very low focus on combat, though when fights do happen it follows what could be described as a traditional JRPG aesthetic. There's initiative, a front row, and a back row. Despite that, since combat is meant to be extremely rare, I feel it's still well suited for play by post. Individual character classes are very simple in the abilities they get but again there's a big focus on group play. Not only do players need to work together and pool resources to accomplish larger tasks, but there's a mechanic where their actions power up a special NPC/DMPC called a dragon that gives the party major bonuses based on the campaign type. (An exploration campaign might get a dragon that lets them cross great distances while one that is battle focused might get a powerful attack they can call on.) The pdf for the book is available for about $14 on itch.io and drivethrurpg.
Fate: This is one of the most well known and "original" narrative driven games. There's a simpler version called Accelerated as well as one with more stats and skills called Core. The Fate point system is great, letting you get bonuses or rerolls similar somewhat to D&D's advantage system, but also being able to be used to bring in narrative effects. For instance, finding a contact in town or deciding you know there's a secret passage somewhere nearby. While it's a cool system I worry it may be too open ended with no set stunts or traits and instead leaving the players and GM to determine which ones make sense for the characters. An SRD is available that has all the rules for Core, Accelerated, and some other versions of the game.
These are the systems I'm familiar with. I've heard of others, like Into the Odd, Maze Rats, White Hack, and Black Hack, but have no experience with them and didn't feel right saying anything that may affect opinions. If others have played these games and can provide thoughts on them, please do! What systems have you found work best for Play by Post? What aspects do you think need to be kept in mind or changed between a standard session and PbP? Please share your thoughts and opinions here.
submitted by Ignisiel to rpg [link] [comments]

List of major desktop Linux issues

These are issues that are not specific to a few distros or DEs, but that apply almost universally. It also includes only things that can actually be fixed on Linux's side. Major applications not supporting Linux is bad, but only the creators of these applications can fix that.
Note that this list is work in progress / subject to change. Put your suggestions in the comments.
Incomplete documentation
Lots of distros have bad or incomplete documentation. For example, Linux Mint's old pdf guides are somewhat outdated and its current online documentation has literally just 4 articles.
Confusing filesystem hierarchy
What do the cryptic three letter names stand for? Does /etc mean "etcetera" and /dev "developer"? Why aren't user directories in /usr? Is /media the place to put my videos in? These are some of the questions a new user might have glancing at the folders in /.
Having an app's files scattered across different directories instead of being in a folder or AppDir like how Windows/macOS do it is inconvenient. Worse, there are a billion of different locations for the binaries (at least the /usr merge is finally improving that).
Gobolinux and historically MoonOS are examples of distros with a more logical and user-friendly filesystem hierarchy.
Package managers
Package managers have several problems:
There are a few solutions to this, notably AppImage and Flatpak, but they need more adoption and support. (Snap is an option too, but it's locked-in and proprietary server-side.)
(Of course, some users will still prefer package managers for whatever reason, so they should remain an option for those who really want them.)
Inconsistency
People love to criticize Windows 10 for having two places for settings and inconsistent visuals, but is having the default applications be a mix of apps with GNOME-style headerbars mixed in with apps that use normal titlebars, as is the case in eg. Ubuntu and Linux Mint really any better? Distros having their default apps come from a variety of DEs is a problem in general.
Elitist community
Not really a problem with Linux itself, but it's annoying and gives Linux a bad image.
Guides use terminals
Again, not a problem with Linux itself, but this can give newbies the impression that terminals are the only way to do stuff in Linux.
Missing or incomplete GUIs
Oftentimes GUIs are missing, forcing the user to use the terminal. Sometimes, you will even have to use the terminal to install an application, as is the case with Brave browser. Also, applications usually don't show launch errors in the GUI, making the user wonder why the app they doubleclicked isn't starting.
Marketing
Yeah, we all know that Linux marketing isn't the greatest. However, what some people miss is that marketing goes beyond merely making people aware of a system. One thing effective marketing should focus on is tarnishing Linux's image as an unintuitive, complicated system where you need to use the terminal for basic tasks. Of course, it might be necessary to fix the 2 issues above first.
Also, distro websites should try to market their features better, with clear descriptions and attractive screenshots (Ubuntu MATE sets a good example here imo) and make it simple to actually download the distro (*cough* Debian *cough*)
submitted by emanresu_yzal1 to linuxusability [link] [comments]

AJ ALMENDINGER

glimpse into the future of Roblox

Our vision to bring the world together through play has never been more relevant than it is now. As our founder and CEO, David Baszucki (a.k.a. Builderman), mentioned in his keynote, more and more people are using Roblox to stay connected with their friends and loved ones. He hinted at a future where, with our automatic machine translation technology, Roblox will one day act as a universal translator, enabling people from different cultures and backgrounds to connect and learn from each other.
During his keynote, Builderman also elaborated upon our vision to build the Metaverse; the future of avatar creation on the platform (infinitely customizable avatars that allow any body, any clothing, and any animation to come together seamlessly); more personalized game discovery; and simulating large social gatherings (like concerts, graduations, conferences, etc.) with tens of thousands of participants all in one server. We’re still very early on in this journey, but if these past five months have shown us anything, it’s clear that there is a growing need for human co-experience platforms like Roblox that allow people to play, create, learn, work, and share experiences together in a safe, civil 3D immersive space.
Up next, our VP of Developer Relations, Matt Curtis (a.k.a. m4rrh3w), shared an update on all the things we’re doing to continue empowering developers to create innovative and exciting content through collaboration, support, and expertise. He also highlighted some of the impressive milestones our creator community has achieved since last year’s RDC. Here are a few key takeaways:
And lastly, our VP of Engineering, Technology, Adam Miller (a.k.a. rbadam), unveiled a myriad of cool and upcoming features developers will someday be able to sink their teeth into. We saw a glimpse of procedural skies, skinned meshes, more high-quality materials, new terrain types, more fonts in Studio, a new asset type for in-game videos, haptic feedback on mobile, real-time CSG operations, and many more awesome tools that will unlock the potential for even bigger, more immersive experiences on Roblox.

Vibin’

Despite the virtual setting, RDC just wouldn’t have been the same without any fun party activities and networking opportunities. So, we invited special guests DJ Hyper Potions and cyber mentalist Colin Cloud for some truly awesome, truly mind-bending entertainment. Yoga instructor Erin Gilmore also swung by to inspire attendees to get out of their chair and get their body moving. And of course, we even had virtual rooms dedicated to karaoke and head-to-head social games, like trivia and Pictionary.
Over on the networking side, Team Adopt Me, Red Manta, StyLiS Studios, and Summit Studios hosted a virtual booth for attendees to ask questions, submit resumes, and more. We also had a networking session where three participants would be randomly grouped together to get to know each other.

What does Roblox mean to you?

We all know how talented the Roblox community is from your creations. We’ve heard plenty of stories over the years about how Roblox has touched your lives, how you’ve made friendships, learned new skills, or simply found a place where you can be yourself. We wanted to hear more. So, we asked attendees: What does Roblox mean to you? How has Roblox connected you? How has Roblox changed your life? Then, over the course of RDC, we incorporated your responses into this awesome mural.
📷
Created by Alece Birnbach at Graphic Recording Studio

Knowledge is power

This year’s breakout sessions included presentations from Roblox developers and staff members on the latest game development strategies, a deep dive into the Roblox engine, learning how to animate with Blender, tools for working together in teams, building performant game worlds, and the new Creator Dashboard. Dr. Michael Rich, Associate Professor at Harvard Medical School and Physician at Boston Children’s Hospital, also led attendees through a discussion on mental health and how to best take care of you and your friends’ emotional well-being, especially now during these challenging times.
📷
Making the Dream Work with Teamwork (presented by Roblox developer Myzta)
In addition to our traditional Q&A panel with top product and engineering leaders at Roblox, we also held a special session with Builderman himself to answer the community’s biggest questions.
📷
Roblox Product and Engineering Q&A Panel

2020 Game Jam

The Game Jam is always one of our favorite events of RDC. It’s a chance for folks to come together, flex their development skills, and come up with wildly inventive game ideas that really push the boundaries of what’s possible on Roblox. We had over 60 submissions this year—a new RDC record.
Once again, teams of up to six people from around the world had less than 24 hours to conceptualize, design, and publish a game based on the theme “2020 Vision,” all while working remotely no less! To achieve such a feat is nothing short of awe-inspiring, but as always, our dev community was more than up for the challenge. I’ve got to say, these were some of the finest creations we’ve seen.
WINNERS
Best in Show: Shapescape Created By: GhettoMilkMan, dayzeedog, maplestick, theloudscream, Brick_man, ilyannna You awaken in a strange laboratory, seemingly with no way out. Using a pair of special glasses, players must solve a series of anamorphic puzzles and optical illusions to make their escape.
Excellence in Visual Art: agn●sia Created By: boatbomber, thisfall, Elttob An obby experience unlike any other, this game is all about seeing the world through a different lens. Reveal platforms by switching between different colored lenses and make your way to the end.
Most Creative Gameplay: Visions of a perspective reality Created By: Noble_Draconian and Spathi Sometimes all it takes is a change in perspective to solve challenges. By switching between 2D and 3D perspectives, players can maneuver around obstacles or find new ways to reach the end of each level.
Outstanding Use of Tech: The Eyes of Providence Created By: Quenty, Arch_Mage, AlgyLacey, xJennyBeanx, Zomebody, Crykee This action/strategy game comes with a unique VR twist. While teams fight to construct the superior monument, two VR players can support their minions by collecting resources and manipulating the map.
Best Use of Theme: Sticker Situation Created By: dragonfrosting and Yozoh Set in a mysterious art gallery, players must solve puzzles by manipulating the environment using a magic camera and stickers. Snap a photograph, place down a sticker, and see how it changes the world.
OTHER TOP PICKS
HONORABLE MENTIONS
For the rest of the 2020 Game Jam submissions, check out the list below:
20-20 Vision | 20/20 Vision | 2020 Vision, A Crazy Perspective | 2020 Vision: Nyon | A Wild Trip! | Acuity | Best Year Ever | Better Half | Bloxlabs | Climb Stairs to 2021 | Double Vision (Team hey apple) | Eyebrawl | Eyeworm Exam | FIRE 2020 | HACKED | Hyperspective | Lucid Scream | Mystery Mansion | New Years at the Museum | New Year’s Bash | Poor Vision | Predict 2020 | RBC News | Retrovertigo | Second Wave | see no evil | Sight Fight | Sight Stealers | Spectacles Struggle | Specter Spectrum | Survive 2020 | The Lost Chicken Leg | The Outbreak | The Spyglass | Time Heist | Tunnel Vision | Virtual RDC – The Story | Vision (Team Freepunk) | Vision (Team VIP People ####) | Vision Developers Conference 2020 | Vision Is Key | Vision Perspective | Vision Racer | Visions | Zepto
And last but not least, we wanted to give a special shout out to Starboard Studios. Though they didn’t quite make it on time for our judges, we just had to include Dave’s Vision for good measure. 📷
Thanks to everyone who participated in the Game Jam, and congrats to all those who took home the dub in each of our categories this year. As the winners of Best in Show, the developers of Shapescape will have their names forever engraved on the RDC Game Jam trophy back at Roblox HQ. Great work!

‘Til next year

And that about wraps up our coverage of the first-ever digital RDC. Thanks to all who attended! Before we go, we wanted to share a special “behind the scenes” video from the 2020 RDC photoshoot.
Check it out:
It was absolutely bonkers. Getting 350 of us all in one server was so much fun and really brought back the feeling of being together with everyone again. That being said, we can’t wait to see you all—for real this time—at RDC next year. It’s going to be well worth the wait. ‘Til we meet again, my friends.
© 2020 Roblox Corporation. All Rights Reserved.

Improving Simulation and Performance with an Advanced Physics Solver

August

05, 2020

by chefdeletat
PRODUCT & TECH
📷In mid-2015, Roblox unveiled a major upgrade to its physics engine: the Projected Gauss-Seidel (PGS) physics solver. For the first year, the new solver was optional and provided improved fidelity and greater performance compared to the previously used spring solver.
In 2016, we added support for a diverse set of new physics constraints, incentivizing developers to migrate to the new solver and extending the creative capabilities of the physics engine. Any new places used the PGS solver by default, with the option of reverting back to the classic solver.
We ironed out some stability issues associated with high mass differences and complex mechanisms by the introduction of the hybrid LDL-PGS solver in mid-2018. This made the old solver obsolete, and it was completely disabled in 2019, automatically migrating all places to the PGS.
In 2019, the performance was further improved using multi-threading that splits the simulation into jobs consisting of connected islands of simulating parts. We still had performance issues related to the LDL that we finally resolved in early 2020.
The physics engine is still being improved and optimized for performance, and we plan on adding new features for the foreseeable future.

Implementing the Laws of Physics

📷
The main objective of a physics engine is to simulate the motion of bodies in a virtual environment. In our physics engine, we care about bodies that are rigid, that collide and have constraints with each other.
A physics engine is organized into two phases: collision detection and solving. Collision detection finds intersections between geometries associated with the rigid bodies, generating appropriate collision information such as collision points, normals and penetration depths. Then a solver updates the motion of rigid bodies under the influence of the collisions that were detected and constraints that were provided by the user.
📷
The motion is the result of the solver interpreting the laws of physics, such as conservation of energy and momentum. But doing this 100% accurately is prohibitively expensive, and the trick to simulating it in real-time is to approximate to increase performance, as long as the result is physically realistic. As long as the basic laws of motion are maintained within a reasonable tolerance, this tradeoff is completely acceptable for a computer game simulation.

Taking Small Steps

The main idea of the physics engine is to discretize the motion using time-stepping. The equations of motion of constrained and unconstrained rigid bodies are very difficult to integrate directly and accurately. The discretization subdivides the motion into small time increments, where the equations are simplified and linearized making it possible to solve them approximately. This means that during each time step the motion of the relevant parts of rigid bodies that are involved in a constraint is linearly approximated.
📷📷
Although a linearized problem is easier to solve, it produces drift in a simulation containing non-linear behaviors, like rotational motion. Later we’ll see mitigation methods that help reduce the drift and make the simulation more plausible.

Solving

📷
Having linearized the equations of motion for a time step, we end up needing to solve a linear system or linear complementarity problem (LCP). These systems can be arbitrarily large and can still be quite expensive to solve exactly. Again the trick is to find an approximate solution using a faster method. A modern method to approximately solve an LCP with good convergence properties is the Projected Gauss-Seidel (PGS). It is an iterative method, meaning that with each iteration the approximate solution is brought closer to the true solution, and its final accuracy depends on the number of iterations.
📷
This animation shows how a PGS solver changes the positions of the bodies at each step of the iteration process, the objective being to find the positions that respect the ball and socket constraints while preserving the center of mass at each step (this is a type of positional solver used by the IK dragger). Although this example has a simple analytical solution, it’s a good demonstration of the idea behind the PGS. At each step, the solver fixes one of the constraints and lets the other be violated. After a few iterations, the bodies are very close to their correct positions. A characteristic of this method is how some rigid bodies seem to vibrate around their final position, especially when coupling interactions with heavier bodies. If we don’t do enough iterations, the yellow part might be left in a visibly invalid state where one of its two constraints is dramatically violated. This is called the high mass ratio problem, and it has been the bane of physics engines as it causes instabilities and explosions. If we do too many iterations, the solver becomes too slow, if we don’t it becomes unstable. Balancing the two sides has been a painful and long process.

Mitigation Strategies

📷A solver has two major sources of inaccuracies: time-stepping and iterative solving (there is also floating point drift but it’s minor compared to the first two). These inaccuracies introduce errors in the simulation causing it to drift from the correct path. Some of this drift is tolerable like slightly different velocities or energy loss, but some are not like instabilities, large energy gains or dislocated constraints.
Therefore a lot of the complexity in the solver comes from the implementation of methods to minimize the impact of computational inaccuracies. Our final implementation uses some traditional and some novel mitigation strategies:
  1. Warm starting: starting with the solution from a previous time-step to increase the convergence rate of the iterative solver
  2. Post-stabilization: reprojecting the system back to the constraint manifold to prevent constraint drift
  3. Regularization: adding compliance to the constraints ensuring a solution exists and is unique
  4. Pre-conditioning: using an exact solution to a linear subsystem, improving the stability of complex mechanisms
Strategies 1, 2 and 3 are pretty traditional, but 3 has been improved and perfected by us. Also, although 4 is not unheard of, we haven’t seen any practical implementation of it. We use an original factorization method for large sparse constraint matrices and a new efficient way of combining it with the PGS. The resulting implementation is only slightly slower compared to pure PGS but ensures that the linear system coming from equality constraints is solved exactly. Consequently, the equality constraints suffer only from drift coming from the time discretization. Details on our methods are contained in my GDC 2020 presentation. Currently, we are investigating direct methods applied to inequality constraints and collisions.

Getting More Details

Traditionally there are two mathematical models for articulated mechanisms: there are reduced coordinate methods spearheaded by Featherstone, that parametrize the degrees of freedom at each joint, and there are full coordinate methods that use a Lagrangian formulation.
We use the second formulation as it is less restrictive and requires much simpler mathematics and implementation.
The Roblox engine uses analytical methods to compute the dynamic response of constraints, as opposed to penalty methods that were used before. Analytics methods were initially introduced in Baraff 1989, where they are used to treat both equality and non-equality constraints in a consistent manner. Baraff observed that the contact model can be formulated using quadratic programming, and he provided a heuristic solution method (which is not the method we use in our solver).
Instead of using force-based formulation, we use an impulse-based formulation in velocity space, originally introduced by Mirtich-Canny 1995 and further improved by Stewart-Trinkle 1996, which unifies the treatment of different contact types and guarantees the existence of a solution for contacts with friction. At each timestep, the constraints and collisions are maintained by applying instantaneous changes in velocities due to constraint impulses. An excellent explanation of why impulse-based simulation is superior is contained in the GDC presentation of Catto 2014.
The frictionless contacts are modeled using a linear complementarity problem (LCP) as described in Baraff 1994. Friction is added as a non-linear projection onto the friction cone, interleaved with the iterations of the Projected Gauss-Seidel.
The numerical drift that introduces positional errors in the constraints is resolved using a post-stabilization technique using pseudo-velocities introduced by Cline-Pai 2003. It involves solving a second LCP in the position space, which projects the system back to the constraint manifold.
The LCPs are solved using a PGS / Impulse Solver popularized by Catto 2005 (also see Catto 2009). This method is iterative and considers each individual constraints in sequence and resolves it independently. Over many iterations, and in ideal conditions, the system converges to a global solution.
Additionally, high mass ratio issues in equality constraints are ironed out by preconditioning the PGS using the sparse LDL decomposition of the constraint matrix of equality constraints. Dense submatrices of the constraint matrix are sparsified using a method we call Body Splitting. This is similar to the LDL decomposition used in Baraff 1996, but allows more general mechanical systems, and solves the system in constraint space. For more information, you can see my GDC 2020 presentation.
The architecture of our solver follows the idea of Guendelman-Bridson-Fedkiw, where the velocity and position stepping are separated by the constraint resolution. Our time sequencing is:
  1. Advance velocities
  2. Constraint resolution in velocity space and position space
  3. Advance positions
This scheme has the advantage of integrating only valid velocities, and limiting latency in external force application but allowing a small amount of perceived constraint violation due to numerical drift.
An excellent reference for rigid body simulation is the book Erleben 2005 that was recently made freely available. You can find online lectures about physics-based animation, a blog by Nilson Souto on building a physics engine, a very good GDC presentation by Erin Catto on modern solver methods, and forums like the Bullet Physics Forum and GameDev which are excellent places to ask questions.

In Conclusion

The field of game physics simulation presents many interesting problems that are both exciting and challenging. There are opportunities to learn a substantial amount of cool mathematics and physics and to use modern optimizations techniques. It’s an area of game development that tightly marries mathematics, physics and software engineering.
Even if Roblox has a good rigid body physics engine, there are areas where it can be improved and optimized. Also, we are working on exciting new projects like fracturing, deformation, softbody, cloth, aerodynamics and water simulation.
Neither Roblox Corporation nor this blog endorses or supports any company or service. Also, no guarantees or promises are made regarding the accuracy, reliability or completeness of the information contained in this blog.
This blog post was originally published on the Roblox Tech Blog.
© 2020 Roblox Corporation. All Rights Reserved.

Using Clang to Minimize Global Variable Use

July

23, 2020

by RandomTruffle
PRODUCT & TECH
Every non-trivial program has at least some amount of global state, but too much can be a bad thing. In C++ (which constitutes close to 100% of Roblox’s engine code) this global state is initialized before main() and destroyed after returning from main(), and this happens in a mostly non-deterministic order. In addition to leading to confusing startup and shutdown semantics that are difficult to reason about (or change), it can also lead to severe instability.
Roblox code also creates a lot of long-running detached threads (threads which are never joined and just run until they decide to stop, which might be never). These two things together have a very serious negative interaction on shutdown, because long-running threads continue accessing the global state that is being destroyed. This can lead to elevated crash rates, test suite flakiness, and just general instability.
The first step to digging yourself out of a mess like this is to understand the extent of the problem, so in this post I’m going to talk about one technique you can use to gain visibility into your global startup flow. I’m also going to discuss how we are using this to improve stability across the entire Roblox game engine platform by decreasing our use of global variables.

Introducing -finstrument-functions

Nothing excites me more than learning about a new obscure compiler option that I’ve never had a use for before, so I was pretty happy when a colleague pointed me to this option in the Clang Command Line Reference. I’d never used it before, but it sounded very cool. The idea being that if we could get the compiler to tell us every time it entered and exited a function, we could filter this information through a symbolizer of some kind and generate a report of functions that a) occur before main(), and b) are the very first function in the call-stack (indicating it’s a global).
Unfortunately, the documentation basically just tells you that the option exists with no mention of how to use it or if it even actually does what it sounds like it does. There’s also two different options that sound similar to each other (-finstrument-functions and -finstrument-functions-after-inlining), and I still wasn’t entirely sure what the difference was. So I decided to throw up a quick sample on godbolt to see what happened, which you can see here. Note there are two assembly outputs for the same source listing. One uses the first option and the other uses the second option, and we can compare the assembly output to understand the differences. We can gather a few takeaways from this sample:
  1. The compiler is injecting calls to __cyg_profile_func_enter and __cyg_profile_func_exit inside of every function, inline or not.
  2. The only difference between the two options occurs at the call-site of an inline function.
  3. With -finstrument-functions, the instrumentation for the inlined function is inserted at the call-site, whereas with -finstrument-functions-after-inlining we only have instrumentation for the outer function. This means that when using-finstrument-functions-after-inlining you won’t be able to determine which functions are inlined and where.
Of course, this sounds exactly like what the documentation said it did, but sometimes you just need to look under the hood to convince yourself.
To put all of this another way, if we want to know about calls to inline functions in this trace we need to use -finstrument-functions because otherwise their instrumentation is silently removed by the compiler. Sadly, I was never able to get -finstrument-functions to work on a real example. I would always end up with linker errors deep in the Standard C++ Library which I was unable to figure out. My best guess is that inlining is often a heuristic, and this can somehow lead to subtle ODR (one-definition rule) violations when the optimizer makes different inlining decisions from different translation units. Luckily global constructors (which is what we care about) cannot possibly be inlined anyway, so this wasn’t a problem.
I suppose I should also mention that I still got tons of linker errors with -finstrument-functions-after-inlining as well, but I did figure those out. As best as I can tell, this option seems to imply –whole-archive linker semantics. Discussion of –whole-archive is outside the scope of this blog post, but suffice it to say that I fixed it by using linker groups (e.g. -Wl,–start-group and -Wl,–end-group) on the compiler command line. I was a bit surprised that we didn’t get these same linker errors without this option and still don’t totally understand why. If you happen to know why this option would change linker semantics, please let me know in the comments!

Implementing the Callback Hooks

If you’re astute, you may be wondering what in the world __cyg_profile_func_enter and __cyg_profile_func_exit are and why the program is even successfully linking in the first without giving undefined symbol reference errors, since the compiler is apparently trying to call some function we’ve never defined. Luckily, there are some options that allow us to see inside the linker’s algorithm so we can find out where it’s getting this symbol from to begin with. Specifically, -y should tell us how the linker is resolving . We’ll try it with a dummy program first and a symbol that we’ve defined ourselves, then we’ll try it with __cyg_profile_func_enter .
[email protected]:~/src/sandbox$ cat instr.cpp int main() {} [email protected]:~/src/sandbox$ clang++-9 -fuse-ld=lld -Wl,-y -Wl,main instr.cpp /usbin/../lib/gcc/x86_64-linux-gnu/crt1.o: reference to main /tmp/instr-5b6c60.o: definition of main
No surprises here. The C Runtime Library references main(), and our object file defines it. Now let’s see what happens with __cyg_profile_func_enter and -finstrument-functions-after-inlining.
[email protected]:~/src/sandbox$ clang++-9 -fuse-ld=lld -finstrument-functions-after-inlining -Wl,-y -Wl,__cyg_profile_func_enter instr.cpp /tmp/instr-8157b3.o: reference to __cyg_profile_func_enter /lib/x86_64-linux-gnu/libc.so.6: shared definition of __cyg_profile_func_enter
Now, we see that libc provides the definition, and our object file references it. Linking works a bit differently on Unix-y platforms than it does on Windows, but basically this means that if we define this function ourselves in our cpp file, the linker will just automatically prefer it over the shared library version. Working godbolt link without runtime output is here. So now you can kind of see where this is going, however there are still a couple of problems left to solve.
  1. We don’t want to do this for a full run of the program. We want to stop as soon as we reach main.
  2. We need a way to symbolize this trace.
The first problem is easy to solve. All we need to do is compare the address of the function being called to the address of main, and set a flag indicating we should stop tracing henceforth. (Note that taking the address of main is undefined behavior[1], but for our purposes it gets the job done, and we aren’t shipping this code, so ¯\_(ツ)_/¯). The second problem probably deserves a little more discussion though.

Symbolizing the Traces

In order to symbolize these traces, we need two things. First, we need to store the trace somewhere on persistent storage. We can’t expect to symbolize in real time with any kind of reasonable performance. You can write some C code to save the trace to some magic filename, or you can do what I did and just write it to stderr (this way you can pipe stderr to some file when you run it).
Second, and perhaps more importantly, for every address we need to write out the full path to the module the address belongs to. Your program loads many shared libraries, and in order to translate an address into a symbol, we have to know which shared library or executable the address actually belongs to. In addition, we have to be careful to write out the address of the symbol in the file on disk. When your program is running, the operating system could have loaded it anywhere in memory. And if we’re going to symbolize it after the fact we need to make sure we can still reference it after the information about where it was loaded in memory is lost. The linux function dladdr() gives us both pieces of information we need. A working godbolt sample with the exact implementation of our instrumentation hooks as they appear in our codebase can be found here.

Putting it All Together

Now that we have a file in this format saved on disk, all we need to do is symbolize the addresses. addr2line is one option, but I went with llvm-symbolizer as I find it more robust. I wrote a Python script to parse the file and symbolize each address, then print it in the same “visual” hierarchical format that the original output file is in. There are various options for filtering the resulting symbol list so that you can clean up the output to include only things that are interesting for your case. For example, I filtered out any globals that have boost:: in their name, because I can’t exactly go rewrite boost to not use global variables.
The script isn’t as simple as you would think, because simply crawling each line and symbolizing it would be unacceptably slow (when I tried this, it took over 2 hours before I finally killed the process). This is because the same address might appear thousands of times, and there’s no reason to run llvm-symbolizer against the same address multiple times. So there’s a lot of smarts in there to pre-process the address list and eliminate duplicates. I won’t discuss the implementation in more detail because it isn’t super interesting. But I’ll do even better and provide the source!
So after all of this, we can run any one of our internal targets to get the call tree, run it through the script, and then get output like this (actual output from a Roblox process, source file information removed):
excluded_symbols = [‘.\boost.*’]* excluded_modules = [‘/usr.\’]* /uslib/x86_64-linux-gnu/libLLVM-9.so.1: 140 unique addresses InterestingRobloxProcess: 38928 unique addresses /uslib/x86_64-linux-gnu/libstdc++.so.6: 1 unique addresses /uslib/x86_64-linux-gnu/libc++.so.1: 3 unique addresses Printing call tree with depth 2 for 29276 global variables. __cxx_global_var_init.5 (InterestingFile1.cpp:418:22) RBX::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp.:415:0) __cxx_global_var_init.19 (InterestingFile2.cpp:183:34) (anonymous namespace)::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp:171:0) __cxx_global_var_init.274 (InterestingFile3.cpp:2364:33) RBX::InterestingRobloxClass3::InterestingRobloxClass3()
So there you have it: the first half of the battle is over. I can run this script on every platform, compare results to understand what order our globals are actually initialized in in practice, then slowly migrate this code out of global initializers and into main where it can be deterministic and explicit.

Future Work

It occurred to me sometime after implementing this that we could make a general purpose profiling hook that exposed some public symbols (dllexport’ed if you speak Windows), and allowed a plugin module to hook into this dynamically. This plugin module could filter addresses using whatever arbitrary logic that it was interested in. One interesting use case I came up for this is that it could look up the debug information, check if the current address maps to the constructor of a function local static, and write out the address if so. This effectively allows us to gain a deeper understanding of the order in which our lazy statics are initialized. The possibilities are endless here.

Further Reading

If you’re interested in this kind of thing, I’ve collected a couple of my favorite references for this kind of topic.
  1. Various: The C++ Language Standard
  2. Matt Godbolt: The Bits Between the Bits: How We Get to main()
  3. Ryan O’Neill: Learning Linux Binary Analysis
  4. Linkers and Loaders: John R. Levine
  5. https://eel.is/c++draft/basic.exec#basic.start.main-3
Neither Roblox Corporation nor this blog endorses or supports any company or service. Also, no guarantees or promises are made regarding the accuracy, reliability or completeness of the information contained in this blog.
submitted by jaydenweez to u/jaydenweez [link] [comments]

Binary Options 60 Seconds Indicator %100 Winning Trades ... SIMPLE 1 HOUR TRADING TRADING SYSTEM - YouTube Best Binary Options Strategy 2020 - 2 Min Strategy Live ... Best 5min Binary Options Trading Strategy - YouTube Price Action Binary Options Signals That Work - YouTube Binary options Free strategy that works - from 100 to ... High ITM strategy for Binary options that works THE TRUTH ABOUT BINARY OPTIONS - YouTube AgimatFX Binary Options System - YouTube 60 Seconds binary options strategy 99 - 100% Winning (100% ...

Trading system that works pdf. 05.06.2017 angamarec 2 Comments . Visit our blog to see all performance videos for YTD. Trading futures and options involves substantial risk of loss and is system suitable for all investors. Get Started In Algorithmic System Today. It can also be installed and loaded onto the Tradestation platform. Trading futures involves substantial risk of loss pdf is not ... binary options scam works - once you know how it works it will be easy for you to identify it when it is being tried on you. In a typical scam the broker manipulate the movements of the underlying asset, usually upon the expiry time, in a way that the outcome will be in favor of the broker. For . example, If the binary option is supposed to expire at, say, 11h30 and at that time it is "in the ... Viper binary system has crawled the whole sound inside it as there are the best resources for the trading in and trading out. There are many people in the judgment that actually fails in the trading system because of the not knowing the term “Binary Records”. Binary records are the strong commands that work whenever a user enter in the trading panel and find the best accuracy in the forex ... Trinity binary options trading strategy is not against the trend despite the fact that the main technical indicator is RSI here and the main trading signal to be used in this strategy is the moment when RSI crosses the level of 50. Lots of traders do not want to stand against the trend and there are lots of beginners who have lost their money because of a common feature of any technical ... Download it here for free but first take a look at how it works. The UOP system consists of 8 trading indicators, some basic and some advanced indicators. Just download the file below and add them all to the Metatrader 4 platform. Make sure to attach the UOP template (also included in the zip file). You will get so [...] Continue reading» Simple Bands Binary Options System With CCI. I've been ... This trading binary options for dummies PDF features the in and outs of BO as well as strategies needed to achieve success in trading binaries. Here are some of the topics you'll discover while reading the book: The single most critical factor to binary options strategy success - ignore it at your own perils. Binary options trading system. Inside outside bar Binary Options Strategy. 60 seconds binary options strategy that works. Best binary options indicators for people who aren’t knowledgeable about 60 second binary options signals how I normally exchange the 15-minute expires in the 5-minute graph, Profitable binary options trading strategies combine only the most efficient choices. This is the technique I developed trading binary options. This is the technique I want to share with you today. This technique generated 150% return by risking only 5% per trade “End of Day”options Something your broker doesn’t want you to know. The Martingale System – Long regarded as the best of the Binary Option Robot’s three possible trading systems, this is a variable trading system. After any unsuccessful trade that the Binary Option Robot makes, it will increase the previous trading amount and if the previous trade was successful, it will lower the trading amount the next time it places a trade. The way binary options works is that if you’re right on the market direction, you’ll get a percentage of what you initially invested in. Typically between 75% and 95% and in some cases, even 100% of your initial investment, depending on your Binary Options Broker. Supposing we’re wrong on the trade, then we’re going to lose the whole amount invested in the short term. We have made a ...

[index] [7029] [27910] [21586] [2967] [26461] [13963] [25726] [11308] [15042] [20237]

Binary Options 60 Seconds Indicator %100 Winning Trades ...

60 Seconds binary options strategy 99 - 100% Winning (100% profit guaranteed) 90 - 95% Winning Binary Options Strategy in case of correct prediction - 1 to 5... Best Binary Options Brokers for this Strategy: 1. 💲💹IQ Option FREE DEMO: http://www.cryptobinarylivingway.com/IQOption1 2. 💲💹Pocket Option FREE DEMO: http https://binaryoptionsbeat.com/ Contact me at: [email protected] Here I try to explain How to grow your deposit fast in few days and earn money from b... Join My Academy 💰Income-Mentor-Box Academy (Signup) http://www.incomementorbox.com/ 👉Income Mentor Box read FULL Review http://www.investing-news.net/i... Are binary options a good idea? If you're thinking about trading binary options, watch this video first. Check out our FREE training for traders https://bi... The aim of this video is to teach you how to become profitable in Binary options and earn money without falling in the scam. It is No loss style. You will always make profit at the end of the day ... https://agimat-trading-system.com/ How to trade Binary Options with Agimat FX 2016 and getting at least 90% accuracy. Copyright: The Agimat FX™ name and logo... OLYMP TRADE broker account opening link below: https://goo.gl/urftNR this video contains two trading examples with explanation. watch and learn how to trade ... All about Trading in Forex and Binary Option Marked. SIMPLE 1 HOUR TRADING TRADING SYSTEM ----- Subscribe! https://YouTube.com/sasanfx1 🔔 Click... One minute strategy to trade price action trading signals. Watch how I use simple trading rules to increase win rate. Get 10x Trading System: https://trading...

http://binary-optiontrade.inanoutdewant.tk