Browser-based text analysis tool that handles pre-processing, analysis, and visualization in an easy-to-use web interface. Written in vanilla JavaScript; charts and graphs use D3.js & Chart.js, and the table uses Handsontable. The application is broken out into five Node.js microservices.
A browser-based toolkit to identify stylistic signatures characteristic of Latin prose and verse using a combination of quantitative stylometry and supervised machine learning.
A browser-based sequence alignment toolkit for the detection of anagrams in Latin literature.
A JavaScript tool for identifying verbal resemblances in literature, built on FuzzySearch & Levenshtein distance.
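As a rough illustration of the edit-distance idea behind that matching (the tool itself is JavaScript; this Python sketch and its function name are illustrative, not the tool's code):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    # prev[j] holds the distance between the processed prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (free on match)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("arma virumque", "arma uirumque"))  # -> 1
```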
- Convert monolithic Python service to a modern Spring Boot microservices architecture.
- Implement CI/CD with Jenkins, Docker, & Kubernetes.
- Support React.js UI framework for Brand Challenges.
- Utilize tools such as Celery, RabbitMQ, Python, MongoDB, Prometheus, Kubernetes, Confluence, Jira, and Bitbucket.
- Meet with product & design teams for sprint planning & retrospectives.
- Meet with technical leadership on high-level architecture choices.
- Lead a team of frontend, backend, Android, & iOS developers.
- Implement external APIs such as Segment, Snowflake, Amplitude, & Salesforce Marketing Cloud.
- Research literature using machine learning, natural language processing, bioinformatics, and systems biology.
- Implement automated CI/CD with GitLab & Docker
- Rewrite the Filum tool in JavaScript; it uses sequence alignment, a technique derived from computational biology that considers the character-by-character similarity of phrases (see the sketch after this list).
- Collaborate with the web team on UI/UX.
- Automate analysis of large literary text files.
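As a quick illustration of character-by-character phrase similarity (in Python rather than the tool's JavaScript, and with illustrative phrases), the standard library's difflib yields a normalized similarity ratio:

```python
from difflib import SequenceMatcher

def phrase_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1], ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Two phrases with strong character-level resemblance despite word changes.
print(phrase_similarity("multa quoque et bello passus",
                        "multa quoque in bello passa"))
```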
- Proficient in PHP/MySQL | JavaScript/MongoDB | Meteor/Node.js
- Develop/Debug/Test custom applications for business clients
- Use R for data analysis and hypothesis testing
- Deploy application code to AWS EC2/RDS instances
- Commit code changes to GitHub via Git
- Create visual charts using D3 & Tableau
- Create TensorFlow models in Python (a minimal sketch follows this list)
- Optimize NVIDIA CUDA code & OpenMP for GPU computing.
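A minimal sketch of the kind of TensorFlow model referred to above; the architecture, layer sizes, and random stand-in data are illustrative placeholders, not the actual models:

```python
import numpy as np
import tensorflow as tf

# Toy two-class classifier on 10 input features.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data just to show the training call.
x = np.random.rand(100, 10).astype("float32")
y = np.random.randint(0, 2, size=(100,))
model.fit(x, y, epochs=3, verbose=0)
```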
- Oversee technicians for VoIP conversion of 20k phones
- Modify/Configure Cisco switches to accommodate new VoIP phones
- Converted a spreadsheet process into an efficient PHP/MySQL data-entry workflow
- Reduced survey time by 50% & deployment time by 30%
- Assist in planning best practices for site surveys
- Ensure surveys and deployments run on schedule
- Migrate site surveys from Excel to a web-based application
- Install & Configure CentOS/Ubuntu AWS Instances
- Full-service IT provider for small businesses in Austin
- Move physical servers to hosted solutions in AWS
- Address trouble tickets opened by customers via email
- Utilized tools such as VMware, Git, HeidiSQL, PuTTY
- Install & configure Cisco routers, switches, & access points
- Implemented VoIP network using Cisco appliances
- Configure and manage load balancers and SSL encryption devices
- Facilitate communication and develop relationships across many cross-functional teams
- Determine the strategic needs of IT management for enterprises.
- Assist customers in designing network/storage system architecture.
- Demonstrate expert knowledge of vendor-specific technologies including VMware, Citrix, and Cisco.
- Deploy new Dell laptops to all members of the GSA
- Back up all user data to fiber-networked NAS drives
- Ensure encryption was enabled on all user machines
- Ensure computers could log onto the encrypted network
- Ensure all user data was restored
Planning for the next influenza pandemic is of vital concern to public health officials. The aim is to use an agent-based model of influenza in the United States to estimate mortality should a pandemic occur today. Models of infectious disease epidemics have proven useful in understanding the dynamics of disease transmission and vaccination strategies, and are routinely used to support decisions made by public health officials. Using an agent-based model of influenza A virus infection, we simulate the effects of two strains of influenza A causing a pandemic in the United States.
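A minimal sketch of the agent-based approach, with two hypothetical strains; the transmission, recovery, and mortality parameters are illustrative, not calibrated values from the study:

```python
import random

# Each agent is a dict with one state: "S"usceptible, a strain name while
# infected, "R"ecovered, or "D"ead. Rates below are illustrative only.
STRAINS = {"H1": {"beta": 0.03, "mortality": 0.002},
           "H3": {"beta": 0.02, "mortality": 0.004}}

def simulate(n_agents=10_000, days=120, contacts=10, recover_p=0.1):
    agents = [{"state": "S"} for _ in range(n_agents)]
    for strain in STRAINS:                      # seed a few infections per strain
        for a in random.sample(agents, 5):
            a["state"] = strain
    deaths = 0
    for _ in range(days):
        for a in agents:
            if a["state"] in STRAINS:
                p = STRAINS[a["state"]]
                # Contact a few random agents and possibly transmit.
                for other in random.sample(agents, contacts):
                    if other["state"] == "S" and random.random() < p["beta"]:
                        other["state"] = a["state"]
                if random.random() < p["mortality"]:
                    a["state"] = "D"
                    deaths += 1
                elif random.random() < recover_p:
                    a["state"] = "R"
    return deaths

print(simulate())  # estimated deaths in this toy population
```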
Profiling of intertextual relationships is foundational for literary study. We demonstrate Filum, a user-friendly tool that employs character-level sequence alignment to detect verbal parallels with or without lexical overlap. When applied to a database of more than 1,000 intertextual parallels of known significance from the Latin poet Valerius Flaccus, Filum recovers more than 80% with reasonable specificity and identifies more than 250 new intertexts previously unrecorded in the scholarship.
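A minimal sketch of the character-level alignment scoring at the heart of this approach, assuming simple match/mismatch/gap parameters (these values and the function are illustrative, not Filum's actual scoring):

```python
def local_alignment_score(a: str, b: str,
                          match=2, mismatch=-1, gap=-1) -> int:
    """Smith-Waterman local alignment score between two phrases."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: never let a score fall below zero.
            curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

# Phrases can score highly even without exact lexical overlap.
print(local_alignment_score("litora multum ille", "litore multa vi"))
```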
This article describes a new quantitative approach to the study of Latin literary genre. Using computational text analysis and supervised machine learning, we construct a detailed stylistic profile of the vast majority of extant classical Latin literature and classify works by traditional genre with high accuracy. By examining the statistical basis for these automated classification decisions, we identify salient stylistic characteristics of each genre at the level of syntax and non-content vocabulary. Through a series of case studies, we illustrate how this approach enables both confirmation at scale of long-appreciated stylistic tendencies and identification of unrecognized generic signatures.
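A minimal sketch of this kind of supervised classification setup, using scikit-learn with function-word frequencies as features; the word list, placeholder corpus, and model choice are illustrative assumptions, not the article's pipeline:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Restrict features to non-content (function) words, echoing the article's
# focus on syntax and non-content vocabulary; this word list is illustrative.
FUNCTION_WORDS = ["et", "in", "est", "non", "cum", "ut", "ab", "qui", "sed", "si"]

texts = [
    "arma virumque cano troiae qui primus ab oris italiam",  # placeholder epic
    "quo usque tandem abutere catilina patientia nostra",    # placeholder oratory
]
genres = ["epic", "oratory"]

clf = make_pipeline(
    CountVectorizer(vocabulary=FUNCTION_WORDS),  # count function words only
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, genres)
print(clf.predict(["senatus populusque romanus"]))
```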
Computational stylometry has become an increasingly important aspect of literary criticism, but many humanists lack the technical expertise or language-specific NLP resources required to exploit computational methods. We demonstrate a stylometry toolkit for analysis of Latin literary texts, which is freely available at www.qcrit.org/stylometry. Our toolkit generates data for a diverse range of literary features and has an intuitive point-and-click interface. The features included have proven effective for multiple literary studies and are calculated using custom heuristics without the need for syntactic parsing. As such, the toolkit models one approach to the user-friendly generation of stylometric data, which could be extended to other premodern and non-English languages underserved by standard NLP resources.
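One example of the kind of heuristic, parse-free feature such a toolkit might compute; this particular feature and its implementation are an illustrative sketch, not the toolkit's code:

```python
import re

def mean_sentence_length(text: str) -> float:
    """Mean words per sentence, using punctuation as a heuristic
    sentence boundary instead of syntactic parsing."""
    sentences = [s for s in re.split(r"[.;?!]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

sample = "arma virumque cano. multa quoque et bello passus; dum conderet urbem."
print(round(mean_sentence_length(sample), 2))  # -> 3.67
```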
Computational stylometry has aided the work of philologists for over 50 years. From simple word counts to the latest use of machine learning for authorship attribution, computation offers the literary critic a wide array of techniques to better understand individual texts and large corpora. To date, these methods have largely been accessible to specialists possessing a background in programming and statistics. The Quantitative Criticism Lab has now designed a user-friendly toolkit that will allow humanists with no prior training in the digital humanities to obtain a wide range of philological data about most classical texts and to perform sophisticated quantitative analyses—all using a simple point-and-click interface. This presentation will demonstrate some of the experiments and literary critical insights enabled by the toolkit, and discuss relevant issues of interpretation and statistical analysis.
Published via freeCodeCamp - 05/03/2018
Published via Medium - 03/24/2019
Published via Medium - 05/27/2020
Published via Medium - 08/11/2021
Published via Medium - 04/22/2023
Published via Medium - 04/23/2023