Eat the congregation, together,
smite them afraid, blast vessels.
O ye dead life, mercy, semblance
of a kind, a measure of shadow.
But, they had no prophet, neighbors
and friends, children of fate, troubled
the Others, remember them not, no-name,
wilderness sacrifices, sore consumed.
Desolation came and passed, Death,
bare the enemy, dead, desolate,
good and great together into the land,
begat headstones, seeds unto the earth.
Father of water, turn thyself,
out to sea, high between, separate,
sing of the living, purify mouths
and soul, fetch their inheritance.
Surge, rain on the wise borderlands,
pull the stopper, flush the tidal womb,
from the unleavened, and the unclean,
reborn, another offering to be devoured.
Another A.I.-assisted poem, this time from a neural network I recently trained myself, using the King James Version (KJV) of the Bible as the corpus. Since this was the first neural network I trained, I learned many valuable lessons. The first run took 2.5 days, and the result was unusable because of all the hard newline characters in the original text. I hadn't realized the text would need to be pre-processed, or what that would entail. I plan on writing a post about the process of making Project Gutenberg texts usable for writing with machine co-authors.
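As a rough sketch of what that pre-processing involves, the core step is un-wrapping the hard line breaks inside paragraphs while keeping blank lines as paragraph boundaries. This is illustrative only, not the exact script I used; a real pipeline would also have to strip Gutenberg headers, footers, and verse numbering:

```python
import re

def unwrap_paragraphs(text: str) -> str:
    """Join hard-wrapped lines within each paragraph into one line,
    treating blank lines as paragraph breaks."""
    # Split on one or more blank lines to find paragraphs.
    paragraphs = re.split(r"\n\s*\n", text.strip())
    # Re-join each paragraph's words with single spaces.
    return "\n\n".join(" ".join(p.split()) for p in paragraphs)

raw = ("In the beginning God created\nthe heaven and the earth.\n\n"
       "And the earth was without form,\nand void;")
print(unwrap_paragraphs(raw))
```

Run on a whole Gutenberg text, this yields one long line per paragraph, which is much friendlier input for a character-level model than text full of mid-sentence newlines.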
There is also a point to be made about the inherent class obstacles in learning and using neural networks. Having a dedicated machine with the right hardware is the difference between waiting days to train a new model and waiting a few hours. Training speed, in turn, limits how big a data set you are willing to start with. The KJV is about 5 MB, and it took roughly 2.5 days to train on. Robin Sloan's pre-trained text is around 123 MB; spending 24+ days to train a model is a serious barrier to entry.