Shangrila
Mind stream
Wednesday, July 18, 2018
Using the Docker registry REST interface
A Docker registry is a server application that stores and distributes Docker images. This note explains how to use curl, and optionally jq, to query the registry for information about container images and their history.
The server's endpoint is assumed to be in a shell variable, such as:
MY_REGISTRY=http://some.host:5000
To list repositories:
curl $MY_REGISTRY/v2/_catalog
...
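If jq is installed, repository names can be extracted directly from the catalog response (this assumes the $MY_REGISTRY variable defined above):

```shell
# Print one repository name per line from the /v2/_catalog response
curl -s $MY_REGISTRY/v2/_catalog | jq -r '.repositories[]'
```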
Note: local development registries often lack proper TLS certificates. For trusted sources, and accepting the associated risks, certificate verification can be bypassed by adding the curl flag -k/--insecure to the commands shown below.
To list available tags for a given image (e.g. myrepo/myapp):
curl $MY_REGISTRY/v2/myrepo/myapp/tags/list
To retrieve the manifest of a given image (e.g. myrepo/myapp):
curl $MY_REGISTRY/v2/myrepo/myapp/manifests/latest
A common application of manifest queries is to catalog images and use lineage and other metadata on those images for automation. This metadata lives in image manifests, which are hard to inspect visually due to their complexity. A JSON processor like jq can turn them into a readable report.
Image registry summary report
The following script produces a simple summary listing every tag for a given image along with its SHA identifier and creation date. Temporary files are used to cache intermediate results.
#=== Configure repo and image
REPO_URL=https://my.registry
IMAGE_REPO=myrepo/myimage
BASE_URL=$REPO_URL/v2/$IMAGE_REPO
#=== Download tags
curl -ks $BASE_URL/tags/list > /tmp/tags
#=== Download each tag's manifest, all concatenated
for tag in $(jq -r '.tags[]' /tmp/tags)
do
  curl -ks $BASE_URL/manifests/$tag
done > /tmp/manifests
#=== Parse manifests and print metadata
cat /tmp/manifests | jq -r '{tag:.tag, info:.history[0].v1Compatibility | fromjson | {created:.created, sha:.config.Image}}' | tee /tmp/repo.data.json
The output can be sorted by timestamp thus:
cat /tmp/repo.data.json | jq -s 'sort_by(.info.created)'
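The same sorted view can answer "what is the newest tag" directly; a sketch using the /tmp/repo.data.json file produced above:

```shell
# Sort records by creation date and keep only the newest tag name
cat /tmp/repo.data.json | jq -s 'sort_by(.info.created) | last | .tag'
```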
Thursday, July 12, 2018
List all Firefox bookmarks
Firefox stores bookmark data in profile folders. For Linux, this is typically ~/.mozilla/firefox (Windows: %APPDATA%\Mozilla, Mac: ~/Library/Mozilla). Each profile has a dedicated directory, e.g. mwad0hks.default, under which a file named places.sqlite holds an SQLite database containing browsing history and bookmarks. To retrieve every stored place in the profile:
cd ~/.mozilla/firefox/<profile>.default # replace <profile>
sqlite3 places.sqlite # open prompt in sqlite client
select * from moz_places; -- list all stored places (history and bookmarks)
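Note that moz_places also contains plain browsing history; the bookmarks proper live in moz_bookmarks, which references moz_places by id. A sketch of a bookmarks-only query (column names per the standard places schema; type = 1 marks bookmark entries):

```shell
# Print title|url for bookmarks only
sqlite3 places.sqlite \
  "SELECT b.title, p.url
     FROM moz_bookmarks b
     JOIN moz_places p ON b.fk = p.id
    WHERE b.type = 1;"
```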
Friday, May 25, 2018
A simple Elasticsearch index
Create and populate a simple Elasticsearch index
curl -XPUT http://dev01:9200/test \
  -H 'Content-Type: application/json' -d '{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "type": {
      "properties": {
        "f1": { "type": "string" },
        "f2": { "type": "string" },
        "f3": { "type": "string" }
      }
    }
  }
}'
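To populate the index created above, a document can be indexed with a PUT to a document URL (dev01:9200 and the field values are the same illustrative assumptions):

```shell
# Index one document with id 1 into the test index
curl -XPUT http://dev01:9200/test/type/1 \
  -H 'Content-Type: application/json' \
  -d '{"f1":"alpha","f2":"beta","f3":"gamma"}'
```

A GET on http://dev01:9200/test/type/1 should then return the stored document.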
Wednesday, May 23, 2018
Kubernetes development with minikube
Minikube's default memory parameters are fine for running small stacks, but more complex deployments, such as those including telemetry components like Prometheus or ELK, will demand larger resource allocations.
$ minikube delete
$ minikube start --memory 8192 --cpus 4   # illustrative values; adjust to the stack's needs
Show Kubernetes dashboard on minikube:
$ minikube dashboard --url
Without --url, the command above launches a local browser.
List services in minikube:
$ minikube service list
|-------------|----------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------|-----------------------------|
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | http://192.168.99.102:30000 |
|-------------|----------------------|-----------------------------|
Minikube ports are bound to local IP addresses exposed by the VM's virtual network adapters. When accessing the minikube host remotely, you can either use an SSH tunnel or NAT in your VM runtime. For VirtualBox, e.g.:
$ vboxmanage controlvm "minikube" natpf1 "k8s,tcp,,30000,,30000"
To inspect the resulting change, check the NAT rules for the NIC:
$ vboxmanage showvminfo minikube
[...]
NIC 1: MAC: 080027D238E5, Attachment: NAT, Cable connected: on, Trace: off [...]
[...]
NIC 1 Rule(2): name = k8s, protocol = tcp, host ip = , host port = 30000, guest ip = , guest port = 30000
NIC 1 Rule(3): name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 35283, guest ip = , guest port = 22
Tuesday, April 24, 2018
Set/unset environment variables from the JVM (Groovy)
Dirty hack recommended only for testing. Only ever tested in Linux; may not work at all on Windows.
import java.lang.reflect.Field

def setEnv(String key, String value) {
try {
// Dirty hack to set environment variables from JVM
// Only tested in Linux - may not work in Windows at all
Map<String, String> env = System.getenv()
Class<?> cl = env.getClass()
Field field = cl.getDeclaredField("m")
field.setAccessible(true)
Map<String, String> writableEnv = (Map<String, String>) field.get(env)
writableEnv.put(key, value)
} catch (Exception e) {
throw new IllegalStateException("Failed to set environment variable", e)
}
}
def unsetEnv(String key) {
try {
// Dirty hack to unset environment variables from JVM
// Only tested in Linux - may not work in Windows at all
Map<String, String> env = System.getenv()
Class<?> cl = env.getClass()
Field field = cl.getDeclaredField("m")
field.setAccessible(true)
Map<String, String> writableEnv = (Map<String, String>) field.get(env)
writableEnv.remove(key)
} catch (Exception e) {
throw new IllegalStateException("Failed to unset environment variable", e)
}
}
Wednesday, February 07, 2018
Build emacs from source (CentOS/RHEL 7)
I often have development systems where I need my favorite editor: Emacs Live. Since Emacs Live requires Emacs 24.4+, and as of writing the latest version in the standard yum repositories is 24.3, rather than fish around for a potentially dangerous RPM, I strongly prefer to build from source.
My remote development hosts typically do not have a build chain installed, so to set one up:
sudo yum group install "Development Tools"
Download and extract the intended Emacs sources, e.g.:
wget http://mirror.clarkson.edu/gnu/emacs/emacs-25.3.tar.gz
tar xvf emacs-25.3.tar.gz
Configure the build:
cd emacs-25.3
./configure
# use --with-x=no with configure to exclude the X11 UI from the build, which is ideal for text-only remote systems
I often, however, run into the following:
...
configure: error: The required function 'tputs' was not found in any library.
The following libraries were tried (in order):
libtinfo, libncurses, libterminfo, libtermcap, libcurses
Please try installing whichever of these libraries is most appropriate
for your system, together with its header files.
For example, a libncurses-dev(el) or similar package.
This can be fixed by installing ncurses:
sudo yum install ncurses-devel
Retry the configure step:
./configure --with-x=no
And proceed to build:
make
sudo make install
Finally, launch (with high color terminal):
export TERM=xterm-256color && emacs
Sunday, December 17, 2017
Linux audio
killall jackdbus          # repeat until ps -ef | grep jack shows no jack processes
pulseaudio --kill         # stop the PulseAudio daemon
aplay -l                  # list playback devices
aplay -D hdmi alert.wav   # play a file on a named device
alsamixer                 # interactive ALSA mixer
Friday, November 17, 2017
Teradata tricks
select * from dbc.SessionInfo;
---
Abort sessions
SELECT SYSLIB.AbortSessions(1, 'dbc', 0, 'Y', 'Y');      -- All sessions for a user
SELECT SYSLIB.AbortSessions(1, 'DBC', 123340, 'Y', 'Y'); -- Specific session
Param1: HostNo
Param2: UserName
Param3: SessionNo
Param4: LogoffSessions
Param5: UserOverride
SELECT * FROM TABLE (MonitorSession(1, '*', 0)) AS dt WHERE PEstate <> 'Idle' OR AMPstate <> 'Idle';
SELECT TOP 10 username, clientAddr, defaultDatabase, CacheFlag,
collectTimeStamp,
StartTime, ElapsedTime, NumResultRows, AMPCPUTime, ERRORCODE,
StatementType, QueryText, NumOfActiveAmps, SpoolUsage, ReqIoKB,
ReqPhysIO
FROM dbc.qrylog
WHERE defaultDatabase LIKE 'TPCDS%'
ORDER BY StartTime DESC;
BEGIN QUERY LOGGING
WITH ALL LIMIT SQLTEXT=0
ON ALL;
SELECT TOP 10 *
FROM dbc.QryLogTDWM
ORDER BY collectTimeStamp DESC;
SELECT TOP 100 *
FROM dbc.QryLogV
WHERE StatementType IN ('Insert', 'Update', 'Delete')
ORDER BY collectTimeStamp DESC;
SELECT TOP 100 *
FROM dbc.QryLogEvents
ORDER BY collectTimeStamp DESC;
---
$ sudo pdestate -a
root's password:
PDE state: DOWN/TDMAINT
$ sudo /etc/init.d/tpa start
Teradata Database Initiator service is starting...
Teradata Database Initiator service started successfully.
$ sudo pdestate -a
PDE state is RUN/STARTED.
DBS state is 1/3: DBS Startup - Starting AMP Partitions
$ sudo pdestate -a
PDE state is RUN/STARTED.
DBS state is 4: Logons are enabled - Users are logged on
---
COLLECT STATISTICS ON tablename COLUMN columnname;
COLLECT STATISTICS ON tablename INDEX (columnname);
COLLECT STATISTICS ON tablename INDEX (col1, col2, ...);
HELP STATISTICS tablename;
COLLECT STATISTICS tablename; -- refresh table statistics
DROP STATISTICS ON tablename;
DIAGNOSTIC HELPSTATS ON FOR SESSION;
Then run EXPLAIN on the query. The optimizer will append something like the following to the query plan:
BEGIN RECOMMENDED STATS FOR FINAL PLAN->
-- "COLLECT STATISTICS COLUMN (I_ITEM_ID) ON TPCDS1000G.item"
(High Confidence)
-- "COLLECT STATISTICS COLUMN (D_DATE) ON TPCDS1000G.date_dim"
(High Confidence)
-- "COLLECT STATISTICS COLUMN (S_STORE_ID) ON TPCDS1000G.store"
(High Confidence)
TPCDS1000G.store_sales" (High Confidence)
-- "COLLECT STATISTICS COLUMN (P_PROMO_SK) ON TPCDS1000G.promotion"
(High Confidence)
-- "COLLECT STATISTICS COLUMN (D_DATE_SK) ON TPCDS1000G.date_dim"
(High Confidence)
<- END RECOMMENDED STATS FOR FINAL PLAN
BEGIN RECOMMENDED STATS FOR OTHER PLANS ->
-- "COLLECT STATISTICS COLUMN (PARTITION) ON
TPCDS1000G.store_sales" (High Confidence)
<- END RECOMMENDED STATS FOR OTHER PLANS
Collect Full Statistics
- Non-indexed columns used in predicates
- All NUSIs with an uneven distribution of values *
- NUSIs used in join steps
- USIs/UPIs if used in non-equality predicates (range constraints)
- Most NUPIs (see below for a fuller discussion of NUPI statistic collection)
- Full statistics always need to be collected on relevant columns and indexes on small tables (less than 100 rows per AMP)
Can Rely on Random AMP Sampling
- USIs or UPIs if only used with equality predicates
- NUSIs with an even distribution of values
- NUPIs that display even distribution, and if used for joining, conform to assumed uniqueness (see Point #2 under “Other Considerations” below)
- See “Other Considerations” for additional points related to random AMP sampling
Option to use USING SAMPLE
- Unique index columns
- Nearly-unique columns or indexes**
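Where sampling applies, the COLLECT statement takes a USING SAMPLE clause; a sketch with placeholder table and column names (clause placement per the older-style Teradata syntax):

```sql
COLLECT STATISTICS USING SAMPLE ON tablename COLUMN columnname;
```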
Collect Multicolumn Statistics
- Groups of columns that often appear together in conditions with equality predicates, if the first 16 bytes of the concatenated column values are sufficiently distinct. These statistics will be used for single-table estimates.
- Groups of columns used for joins or aggregations, where there is either a dependency or some degree of correlation among them.*** With no multicolumn statistics collected, the optimizer assumes complete independence among the column values. The more that the combination of actual values are correlated, the greater the value of collecting multicolumn statistics will be.
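Multicolumn statistics use the same COLLECT syntax with a parenthesized column list (names are placeholders):

```sql
COLLECT STATISTICS ON tablename COLUMN (col1, col2);
```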
Basic HBase in Clojure
(println (seq (.getURLs (java.lang.ClassLoader/getSystemClassLoader)))) ;; get class path in REPL
;; in REPL
(require '[clojure-hbase.core :as hbase])
(import [org.apache.hadoop.hbase HBaseConfiguration HConstants KeyValue])
(import [org.apache.hadoop.hbase.client HTablePool Get Put Delete Scan Result RowLock HTableInterface])
(hbase/set-config (hbase/make-config {
:zookeeper.znode.parent "/hbase-unsecure"
:hbase.zookeeper.property.clientPort "2181"
:hbase.cluster.distributed "true"
:hbase.zookeeper.quorum "hdp005-3,hdp005-21,hdp005-23"
}
))
(hbase/table "tweets-test")
Clojure REPL and tricks
lein repl :start :port 10010
(use 'clojure.repl 'clojure.pprint)
(setq nrepl-popup-stacktraces-in-repl nil)
(setq nrepl-auto-select-error-buffer nil)
Zoom
You can use C-x C-+ and C-x C-- (text-scale-adjust) to increase or decrease the buffer text size (C-+ or C-- to repeat).
Very important: add cider plugin in project.clj
(defproject lucy "0.1.0-SNAPSHOT"
:description "FIXME: write description"
:url "http://example.com/FIXME"
:license {:name "Eclipse Public License"
:url "http://www.eclipse.org/legal/epl-v10.html"}
:dependencies [
[org.clojure/clojure "1.6.0"]
[clucy "0.4.0"]
]
:plugins [[cider/cider-nrepl "0.6.0"]]
)
In CIDER
C-c M-n: set REPL namespace from buffer
C-x C-e: eval preceding expression
C-c C-k: compile buffer
C-c C-r: eval region
C-c C-c: abort eval
- M-: Read a single Emacs Lisp expression in the minibuffer, evaluate it, and print the value in the echo area (eval-expression).
- C-x h selects the entire buffer.
- C-M-\ reindents the selected region.
- C-M-@ selects an s-expression.
- M-. Jump to the definition of a symbol. If invoked with a prefix argument, or no symbol is found at point, prompt for a symbol.
- M-@ (mark-word) puts the mark at the end of the next word.
- Use M-< to move to the beginning of the buffer, and M-> to move to the end.
- C-v to scroll down, and M-v to scroll up.
- M-d Kill up to the end of a word (kill-word).
- M-\ Delete spaces and tabs around point (delete-horizontal-space).
- M-<SPC> Delete spaces and tabs around point, leaving one space (just-one-space).
- M-^ Join two lines by deleting the intervening newline, along with any indentation following it (delete-indentation).
- C-t Transpose two characters (transpose-chars).
- M-t Transpose two words (transpose-words).
- C-M-t Transpose two balanced expressions (transpose-sexps).
- C-x C-t Transpose two lines (transpose-lines).
- M-x auto-revert-tail-mode (tail a file)
You can hide the *nrepl-connection* and *nrepl-server* buffers from appearing in some buffer-switching commands such as switch-to-buffer (C-x b):
(setq nrepl-hide-special-buffers t)
- M-% string <RET> newstring <RET>
- Replace some occurrences of string with newstring.
- C-M-% regexp <RET> newstring <RET>
- Replace some matches for regexp with newstring.
<SPC>
- to replace the occurrence with newstring.
<DEL>
- to skip to the next occurrence without replacing this one.
, (Comma)
- to replace this occurrence and display the result. You are then asked for another input character to say what to do next. Since the replacement has already been made, <DEL> and <SPC> are equivalent in this situation; both move to the next occurrence.You can type C-r at this point (see below) to alter the replaced text. You can also type C-x u to undo the replacement; this exits the
query-replace
, so if you want to do further replacement you must use C-x <ESC> <ESC> <RET> to restart (see Repetition).
<RET>
- to exit without doing any more replacements.
. (Period)
- to replace this occurrence and then exit without searching for more occurrences.
!
- to replace all remaining occurrences without asking again.
Y (Upper-case)
- to replace all remaining occurrences in all remaining buffers in multi-buffer replacements (like the Dired `Q' command which performs query replace on selected files). It answers this question and all subsequent questions in the series with "yes", without further user interaction.
N (Upper-case)
- to skip to the next buffer in multi-buffer replacements without replacing remaining occurrences in the current buffer. It answers this question "no", gives up on the questions for the current buffer, and continues to the next buffer in the sequence.
^
- to go back to the position of the previous occurrence (or what used to be an occurrence), in case you changed it by mistake or want to reexamine it.
Print object members
(doseq [m (.getMethods (type index))] (println m))
(use 'clojure.reflect 'clojure.pprint)
(pprint (reflect "hello"))
{:bases
#{java.io.Serializable java.lang.Comparable java.lang.Object
java.lang.CharSequence},
:flags #{:public :final},
:members
#{{:name valueOf,
:return-type java.lang.String,
:declaring-class java.lang.String,...
List classpath
(defn classpath []
  (seq (.getURLs (java.lang.ClassLoader/getSystemClassLoader))))
Expand macros
(macroexpand '(time (print "timing")))
;; or better yet
(clojure.pprint/pprint (macroexpand '(time 1)))
List members in namespace
(require '[clojure-hbase.core :as hbase])
(dir clojure-hbase.core)
Show function source
user=> (source hbase/table)
(defn table
"Gets an HTable from the open HTablePool by name."
[table-name]
(io!
(.getTable (htable-pool) (to-bytes table-name))))
Find elements by regex match in collection
(filter #(re-find #"zoo" (key %)) (seq (hbase/make-config nil)))
clojure.core/all-ns
Returns a sequence of all namespaces.
List files
(take 10 (file-seq (clojure.java.io/file ".")))