Blog

  • SEO Slugs with SvelteKit

    One of the things that is not built into SvelteKit is handling slugs for the URL references needed for Search Engine Optimization. In the following I will walk through how I implemented slug generation for my app ‘Céillí: The game of Trading‘, which pits different trading guilds against each other in a competition for champion of equity and coin trading, while also providing a platform for equity discovery, strategy and trading. This article is intended for those who already have a basic knowledge of SvelteKit, primarily those migrating to the framework from other JavaScript or LAMP based systems.

    To generate a slug, one might be tempted to take the ‘name’ of something such as a guild, like ‘Hansa League’, and use JavaScript to rewrite the URL with dashes so that it becomes ‘hansa-league’. This is problematic because a user may introduce extra spaces, and converting the slug back into a ‘name’ for lookup in the database will not accommodate differences in spacing, capitalization, etc. It is therefore recommended to use a dedicated slug column in your database. My project uses a Postgres database; here is the SQL structure for my purposes:

    CREATE TABLE guilds (
        id BIGSERIAL PRIMARY KEY,
        name TEXT NOT NULL,
        description TEXT,
        slug TEXT NOT NULL UNIQUE,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );

    To get SvelteKit to pick up on this table you need to define it in the ORM you are using. For my purposes that is Drizzle, so the Drizzle schema needs the following:

      // src/lib/server/database/drizzle-schemas.ts
      import { pgTable, bigserial, text, timestamp } from 'drizzle-orm/pg-core';

      // Guilds table schema
      export const guildsTable = pgTable("guilds", {
    	id: bigserial('id', { mode: 'number' }).primaryKey(),
    	name: text("name").notNull(),
    	description: text("description"),
    	createdAt: timestamp("created_at", { mode: "string" }).defaultNow(),
    	slug: text("slug").notNull().unique()
      });

    Additionally, to keep users from entering extra spaces and the like, if you are using Zod for form validation (which is highly recommended) you will need something like this in your Zod schema:

    import { z } from 'zod';

    export const guildsSchema = z.object({
    	id: z.string().optional(),
    	name: z
    		.string({ required_error: 'name is required' })
    		.min(1, { message: 'name is required' })
    		.trim()
    		.refine((val) => !/\s{2,}/.test(val), {
    			message: 'No multiple consecutive spaces allowed'
    		}),
    	description: z
    		.string({ required_error: 'description is required' })
    		.min(1, { message: 'description is required' })
    		.trim()
    		.refine((val) => !/\s{2,}/.test(val), {
    			message: 'No multiple consecutive spaces allowed'
    		}),
    	createdAt: z.date().optional(),
    });

    export type GuildsSchema = typeof guildsSchema;
    

    You will note the special validations on both ‘name’ and ‘description’ that disallow extra spaces.
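
    As a quick sanity check, here is a small, hypothetical example of the schema rejecting a name with doubled internal spaces. The import path is an assumption; note that .trim() only strips leading and trailing whitespace, so the refine is what catches the internal double space:

    import { guildsSchema } from '$lib/schemas/guilds'; // hypothetical path to the schema above

    const result = guildsSchema.safeParse({
    	name: 'Hansa  League', // two consecutive spaces
    	description: 'A trading guild'
    });

    console.log(result.success); // false
    if (!result.success) {
    	// prints: "No multiple consecutive spaces allowed"
    	console.log(result.error.issues[0].message);
    }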

    To handle creating slugs I have a helper module that I put in my $lib directory:

    // src/lib/generateSlugHelper.ts
    import { nameToSlug } from './slugHelper'; // defined below
    
    export async function generateUniqueSlug(
      name: string,
      exists: (slug: string) => Promise<boolean>
    ): Promise<string> {
      let baseSlug = nameToSlug(name);
      let slug = baseSlug;
      let counter = 1;
    
      while (await exists(slug)) {
        slug = `${baseSlug}-${counter++}`;
      }
    
      return slug;
    }
    
    

    This references the slugHelper module:

    // src/lib/slugHelper.ts
    export function nameToSlug(name: string): string {
      return name
        .trim()
        .toLowerCase()
        .replace(/[^a-z0-9\s-]/g, '')
        .replace(/\s+/g, '-')
        .replace(/-+/g, '-');
    }
    
    export function slugToName(slug: string): string {
      return slug
        .trim()
        .toLowerCase()
        .replace(/-+/g, ' ')
        .replace(/\b\w/g, char => char.toUpperCase());
    }
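
    For a sense of what these helpers produce, here is a small, hypothetical usage example. Note that slugToName() only title-cases the slug’s words; it cannot restore the original punctuation or capitalization, which is exactly why the slug is stored in its own column rather than being reverse-engineered from the name:

    // hypothetical usage of the helpers above
    import { nameToSlug, slugToName } from '$lib/slugHelper';

    console.log(nameToSlug('  Hansa   League! ')); // "hansa-league"
    console.log(slugToName('hansa-league'));       // "Hansa League"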
    

    So when a user goes to add a guild, there is actually minimal impact on how you call createGuild() from your guild model, which holds the usual controller logic for creation, deletion, updates, etc. The interaction with slugification is done in the model file rather than in the +page.server.ts or +page.svelte files. An example from my model file for createGuild() is this:

    
    // guilds-model.ts
    import { db } from '$lib/server/db';
    import { guildsTable, userGuildsTable } from '$lib/server/database/drizzle-schemas'; // assuming userGuildsTable lives alongside guildsTable
    import { and, eq, ne } from 'drizzle-orm';
    import { generateUniqueSlugForTable } from '$lib/utils/generateUniqueSlugForTableHelper';
    import type { Guilds, UpdateGuilds } from '$lib/types'; // wherever your Guilds/UpdateGuilds types are defined

    export const createGuild = async (guildData: Guilds, userId: string) => {
    	try {
    		console.log('Creating guild:', guildData);
    
    		// Generate a unique slug based on the name
    		const slug = await generateUniqueSlugForTable(guildData.name, async (slug) => {
    			const existing = await db
    				.select()
    				.from(guildsTable)
    				.where(eq(guildsTable.slug, slug));
    
    			return existing.length > 0;
    		});
    
    		// Insert the guild
    		const [guild] = await db
    			.insert(guildsTable)
    			.values({ ...guildData, slug })
    			.returning();
    
    		if (!guild) {
    			throw new Error('Guild creation failed');
    		}
    
    		// Insert into user_guilds as Owner
    		await db.insert(userGuildsTable).values({
    			user_id: userId,
    			guild_id: guild.id,
    			role: 'Owner',
    		});
    
    		return guild;
    	} catch (error) {
    		console.error('Error creating guild:', error);
    		throw new Error('Failed to create guild');
    	}
    };
    
    

    This references another helper, generateUniqueSlugForTable, which in turn uses the slugHelper.ts file to create the slug:

    
    // src/lib/utils/generateUniqueSlugForTableHelper.ts
    import { nameToSlug } from '$lib/slugHelper';
    
    /**
     * Generic unique slug generator for any table/column.
     *
     * @param name The base string to slugify (e.g. title)
     * @param checkExists Function that checks whether a slug already exists (the caller can exclude the current record’s ID when editing)
     * @returns A unique slug string
     */
    export async function generateUniqueSlugForTable(
      name: string,
      checkExists: (slug: string) => Promise<boolean>
    ): Promise<string> {
      const baseSlug = nameToSlug(name);
      let slug = baseSlug;
      let counter = 1;
    
      while (await checkExists(slug)) {
        slug = `${baseSlug}-${counter++}`;
      }
    
      return slug;
    }
    

    createGuild() itself is called from the +page.server.ts file in the process of creating a new guild:

    const newGuild = await createGuild(my_guild, user_id);
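
    For context, here is a minimal sketch of what that form action in +page.server.ts might look like. The route path, the schema and model import paths, and locals.user are assumptions based on the code above, not a definitive implementation:

    // src/routes/guilds/new/+page.server.ts (hypothetical route)
    import { fail, redirect } from '@sveltejs/kit';
    import { guildsSchema } from '$lib/schemas/guilds'; // wherever your zod schema lives
    import { createGuild } from '$lib/server/models/guilds-model'; // wherever your guilds-model.ts lives
    import type { Actions } from './$types';

    export const actions: Actions = {
    	default: async ({ request, locals }) => {
    		const formData = Object.fromEntries(await request.formData());
    		const parsed = guildsSchema.safeParse(formData);

    		if (!parsed.success) {
    			return fail(400, { errors: parsed.error.flatten().fieldErrors });
    		}

    		// createGuild generates and stores the slug internally,
    		// so the action never has to touch slug logic
    		const newGuild = await createGuild(parsed.data, locals.user.id);

    		throw redirect(303, `/guilds/${newGuild.slug}`);
    	}
    };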

    So wherever you need to dynamically generate a link, you reference the slug column rather than the ‘name’ column; the slug is generated and kept up to date by the helpers above.
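
    To close the loop, a dynamic route keyed on the slug might look roughly like this. This is a sketch; the route path and 404 handling are assumptions, and the query simply mirrors the lookups in the model file:

    // src/routes/guilds/[slug]/+page.server.ts (hypothetical route)
    import { error } from '@sveltejs/kit';
    import { db } from '$lib/server/db';
    import { guildsTable } from '$lib/server/database/drizzle-schemas';
    import { eq } from 'drizzle-orm';
    import type { PageServerLoad } from './$types';

    export const load: PageServerLoad = async ({ params }) => {
    	// look the guild up by its slug, never by its display name
    	const [guild] = await db
    		.select()
    		.from(guildsTable)
    		.where(eq(guildsTable.slug, params.slug));

    	if (!guild) {
    		throw error(404, 'Guild not found');
    	}

    	return { guild };
    };

    In a component you would then link with <a href="/guilds/{guild.slug}">{guild.name}</a> instead of building the href from the name.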

    Of course, the next issue is what to do when the user edits or updates the name: the slug needs to be updated as well. This is again handled in guilds-model.ts, the place where the controller logic lives, so when +page.server.ts handles an update or edit it calls this function from the model:

    
    export const editGuild = async (id: string, guildData: UpdateGuilds) => {
    	try {
    		// Step 1: Get the existing record
    		const [existingGuild] = await db
    			.select()
    			.from(guildsTable)
    			.where(eq(guildsTable.id, id));
    
    		if (!existingGuild) {
    			throw new Error(`Guild with id ${id} not found`);
    		}
    
    		// Step 2: Check if the name has changed
    		let updatedSlug: string | undefined;
    
    		if (
    			guildData.name &&
    			guildData.name.trim() !== existingGuild.name.trim()
    		) {
    			// Step 3: Generate a new unique slug
    			updatedSlug = await generateUniqueSlugForTable(guildData.name, async (slug) => {
    				const existing = await db
    					.select()
    					.from(guildsTable)
    					.where(and(eq(guildsTable.slug, slug), ne(guildsTable.id, id))); // exclude current record
    
    				return existing.length > 0;
    			});
    		}
    
    		// Step 4: Add slug to update data if needed
    		const updatePayload = {
    			...guildData,
    			...(updatedSlug ? { slug: updatedSlug } : {}),
    		};
    
    		const result = await db
    			.update(guildsTable)
    			.set(updatePayload)
    			.where(eq(guildsTable.id, id))
    			.returning();
    
    		return result.length ? result[0] : null;
    	} catch (error) {
    		console.error(`Error updating guild with id ${id}:`, error);
    		throw new Error('Failed to update guild');
    	}
    };
    
    

    This again calls the generateUniqueSlugForTable function from the helper of the same name. And there you go: with that you can effectively deal with the creation and maintenance of SEO slugs in SvelteKit.
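
    For completeness, here is a minimal sketch of the corresponding edit action. The route, the assumption that the form posts the guild’s id in a hidden field, and the import paths are all hypothetical; the point is simply that slug regeneration happens inside editGuild, not in the action:

    // src/routes/guilds/[slug]/edit/+page.server.ts (hypothetical route)
    import { fail, redirect } from '@sveltejs/kit';
    import { guildsSchema } from '$lib/schemas/guilds'; // wherever your zod schema lives
    import { editGuild } from '$lib/server/models/guilds-model'; // wherever your guilds-model.ts lives
    import type { Actions } from './$types';

    export const actions: Actions = {
    	default: async ({ request }) => {
    		// assumes the form posts the guild's id in a hidden field
    		const { id, ...formData } = Object.fromEntries(await request.formData());
    		const parsed = guildsSchema.safeParse(formData);

    		if (!parsed.success) {
    			return fail(400, { errors: parsed.error.flatten().fieldErrors });
    		}

    		// editGuild regenerates the slug internally if the name changed
    		const updated = await editGuild(String(id), parsed.data);

    		if (!updated) {
    			return fail(404, { message: 'Guild not found' });
    		}

    		throw redirect(303, `/guilds/${updated.slug}`);
    	}
    };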

    So to recap the files used: we have the controller model for the guilds table, the Zod schema for form validation, and three helper files.

    If you follow this methodology you will end up with three helper files for creating slugs from a field such as ‘name’ (or ‘title’, etc.):

    1. src/lib/generateSlugHelper.ts
    2. src/lib/utils/generateUniqueSlugForTableHelper.ts
    3. src/lib/slugHelper.ts
  • AI LLM Text Extraction of Financial Data (EPS)

    One of the projects I am working on is a fintech application, Céillí. One of the data points presented to the stock trading community is Earnings Per Share (EPS). It’s a key financial metric that represents the portion of a company’s profit allocated to each outstanding share of common stock. EPS is calculated by dividing the company’s net income (after preferred dividends, if any) by the average number of outstanding common shares during the period. It’s commonly used to gauge a company’s profitability and is a vital input for valuation metrics like the price-to-earnings (P/E) ratio.

    Below is an example Python script that calculates the Earnings Per Share (EPS) for a commodity-producing company (or any company) and then uses that EPS value to compute the Price-to-Earnings (P/E) ratio. You should have the net income and outstanding shares data to calculate this:

    
    def calculate_eps(net_income: float, num_shares: float) -> float:
        """
        Calculate Earnings Per Share (EPS).

        Args:
            net_income (float): The company's net income.
            num_shares (float): The average number of outstanding shares.

        Returns:
            float: EPS value.
        """
        if num_shares == 0:
            raise ValueError("Number of shares cannot be zero")
        return net_income / num_shares


    def calculate_pe_ratio(share_price: float, eps: float) -> float:
        """
        Calculate the Price-to-Earnings (P/E) ratio: current share price divided by EPS.
        """
        if eps == 0:
            raise ValueError("EPS cannot be zero")
        return share_price / eps

    Getting EPS is a cumbersome task, as the data is not in a freely available API, and the providers that do have it charge a hefty fee which can eat into your returns. This information is usually contained in transcripts of earnings reports and in general market reports in the media.

    For my acquisition of EPS data I rely on spidering news headlines, which I also use for sentiment analysis purposes. I noticed that much of the information needed for fundamental analysis (as opposed to technical analysis) is contained in news headlines as well as in the transcripts of earnings calls, where companies present their productive value at quarterly and annual conference calls. Usually one would rely on regex to extract information from text. Now, with the advent of Large Language Models and generative AI, this task can become a lot less cumbersome and more automated. One way to do this is to use Python and an LLM hosted on Hugging Face and developed by NuMind, https://huggingface.co/numind/NuExtract-1.5-tiny

    At that link you can find more code samples using Python. For my purposes, I used the code below to extract EPS from the text. How this works is that you define a JSON template to structure the data, and the LLM uses it to place key data points from the text into that structure, which of course could easily be pushed automatically to a database for any UI needs you may have.

    import json
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch
    
    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
    
    
    def predict_NuExtract(model, tokenizer, text, schema, examples=["", "", ""]):
        # Parse and reformat the schema
        schema = json.dumps(json.loads(schema), indent=4)
        input_llm = "<|input|>\n" + schema + "\n"
        
        # Only add examples if they are non-empty valid JSON strings
        for ex in examples:
            if ex.strip():  # only process if not empty
                input_llm += json.dumps(json.loads(ex), indent=4) + "\n"
        
        # Add the text to extract data from
        input_llm += "### Text:\n" + text + "\n<|output|>\n"
        
        # Tokenize and generate output
        input_ids = tokenizer(input_llm, return_tensors="pt", truncation=True, max_length=4000).to(device)
        output = tokenizer.decode(
            model.generate(**input_ids, use_cache=False)[0], skip_special_tokens=True
        )
        return output.split("<|output|>")[1].split("<|end-output|>")[0]

    model = AutoModelForCausalLM.from_pretrained("numind/NuExtract-1.5-tiny", trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained("numind/NuExtract-1.5-tiny", trust_remote_code=True)
    model.to(device)
    model.eval()
    
    text = ["Relmada Therapeutics Q4 2024 GAAP EPS $(0.62) Beats $(0.70) Estimate.",
    "Clearside Biomedical Q4 2024 GAAP EPS $(0.10), Inline, Sales $306.00K Beat $176.67K Estimate.",
    "Argan Q4 2024 GAAP EPS $2.22 Beats $1.15 Estimate, Sales $232.474M Beat $197.500M Estimate.",
    "Plus Therapeutics FY24 EPS $(1.95) Vs. $(4.24) YoY, Grant Revenue $5.8M Up From $4.9M YoY",
    "SeaStar Medical Holding Q4 EPS $(0.90) Misses $(0.89) Estimate, Sales $67.00K Miss $150.00K Estimate.",
    "Pulse Biosciences Q4 EPS $(0.31) Down From $(0.21) YoY.",
    "CalAmp FY 2024 GAAP EPS $(11.04), Inline.",
    "VirTra Q4 2024 GAAP EPS $(0.08) Misses $0.04 Estimate, Sales $5.40M Miss $7.45M Estimate.",
    "Better Choice Q4 EPS $(0.50), Sales $7.2M Up 26% From YoY."]
    
    schema = """{
        "company": "",
        "period": "",
        "eps_data": {
            "eps_type": "",
            "actual_eps": "",
            "eps_estimate": "",
            "eps_result": ""
        },
        "sales_data": {
            "actual_sales": "",
            "sales_estimate": "",
            "sales_result": ""
        }
    }"""
    
    for i in text:
        prediction = predict_NuExtract(model, tokenizer, i, schema) 
        print(prediction)   
    
    
    
    '''
    Output: 
    
    {
        "company": "Relmada Therapeutics",
        "period": "Q4 2024",
        "eps_data": {
            "eps_type": "GAAP",
            "actual_eps": "0.62",
            "eps_estimate": "0.70",
            "eps_result": "$(0.62)"
        },
        "sales_data": {
            "actual_sales": "",
            "sales_estimate": "",
            "sales_result": ""
        }
    }
    
    Setting `pad_token_id` to `eos_token_id`:151646 for open-end generation.
    
    {
        "company": "Clearside Biomedical",
        "period": "Q4 2024",
        "eps_data": {
            "eps_type": "GAAP",
            "actual_eps": "0.10",
            "eps_estimate": "176.67K",
            "eps_result": ""
        },
        "sales_data": {
            "actual_sales": "$306.00K",
            "sales_estimate": "$176.67K",
            "sales_result": ""
        }
    }
    
    Setting `pad_token_id` to `eos_token_id`:151646 for open-end generation.
    
    {
        "company": "Argan",
        "period": "Q4 2024",
        "eps_data": {
            "eps_type": "GAAP",
            "actual_eps": "$2.22",
            "eps_estimate": "$1.15",
            "eps_result": "$232.474M"
        },
        "sales_data": {
            "actual_sales": "$232.474M",
            "sales_estimate": "$197.500M",
            "sales_result": ""
        }
    }
    
    Setting `pad_token_id` to `eos_token_id`:151646 for open-end generation.
    
    {
        "company": "Plus Therapeutics",
        "period": "FY24",
        "eps_data": {
            "eps_type": "EPS",
            "actual_eps": "1.95",
            "eps_estimate": "4.24",
            "eps_result": ""
        },
        "sales_data": {
            "actual_sales": "5.8M",
            "sales_estimate": "",
            "sales_result": ""
        }
    }
    
    Setting `pad_token_id` to `eos_token_id`:151646 for open-end generation.
    
    {
        "company": "SeaStar Medical Holding",
        "period": "Q4",
        "eps_data": {
            "eps_type": "",
            "actual_eps": "0.90",
            "eps_estimate": "0.89",
            "eps_result": "Misses"
        },
        "sales_data": {
            "actual_sales": "$67.00K",
            "sales_estimate": "$150.00K",
            "sales_result": ""
        }
    }
    
    Setting `pad_token_id` to `eos_token_id`:151646 for open-end generation.
    
    {
        "company": "Pulse Biosciences",
        "period": "Q4",
        "eps_data": {
            "eps_type": "EPS",
            "actual_eps": "0.31",
            "eps_estimate": "0.21",
            "eps_result": "Down From"
        },
        "sales_data": {
            "actual_sales": "",
            "sales_estimate": "",
            "sales_result": ""
        }
    }
    
    Setting `pad_token_id` to `eos_token_id`:151646 for open-end generation.
    
    {
        "company": "CalAmp",
        "period": "FY 2024",
        "eps_data": {
            "eps_type": "GAAP",
            "actual_eps": "11.04",
            "eps_estimate": "",
            "eps_result": ""
        },
        "sales_data": {
            "actual_sales": "",
            "sales_estimate": "",
            "sales_result": ""
        }
    }
    
    Setting `pad_token_id` to `eos_token_id`:151646 for open-end generation.
    
    {
        "company": "VirTra",
        "period": "Q4 2024",
        "eps_data": {
            "eps_type": "GAAP",
            "actual_eps": "0.08",
            "eps_estimate": "0.04",
            "eps_result": "Misses"
        },
        "sales_data": {
            "actual_sales": "$5.40M",
            "sales_estimate": "$7.45M",
            "sales_result": ""
        }
    }
    
    Setting `pad_token_id` to `eos_token_id`:151646 for open-end generation.
    
    {
        "company": "Better Choice",
        "period": "Q4",
        "eps_data": {
            "eps_type": "EPS",
            "actual_eps": "0.50",
            "eps_estimate": "",
            "eps_result": ""
        },
        "sales_data": {
            "actual_sales": "$7.2M",
            "sales_estimate": "",
            "sales_result": "Up 26% From YoY"
        }
    }
    
    
    '''

    As one can see, results can vary, although one could introduce steps into this basic process to further refine the model, or even use case-based models specific to different headline patterns, since financial data is presented in set ways in market news. Other financial figures are also reported in the news, so this capability can also help with collecting EBITDA, P/E, and so on.

    OpenAI’s ChatGPT had this to add to the EPS discussion, which I found interesting:

    me: is there an optimal eps value or is it relative to each stock? what constitutes a miss?

    ChatGPT said:

    EPS (Earnings Per Share) isn’t a one-size-fits-all metric—it’s relative to the company, its industry, historical performance, and market expectations. There isn’t an “optimal” EPS value that applies across the board. Instead, investors look at trends, growth rates, and how the current EPS compares to analysts’ forecasts or the company’s own guidance.

    What Constitutes a “Miss”

    A “miss” typically occurs when a company’s reported EPS is below the consensus estimate or its internal guidance. Analysts and investors use these estimates as benchmarks, so if the actual EPS falls short, it’s seen as underperformance—even if the EPS might be strong in absolute terms. The severity of the miss can affect investor sentiment and, consequently, the stock’s price.

    Key Considerations

    • Relative Performance:
      For one company, an EPS of, say, $2.00 might be excellent if the industry average is $1.00, while another company might be expected to have an EPS of $5.00 based on its size and market position.
    • Growth Trends:
      Investors often consider not just the current EPS but also its trend over time. Consistent growth can be more important than a single high or low number.
    • Market Expectations:
      EPS is closely watched around earnings announcements. A miss relative to consensus estimates can trigger a negative reaction, even if the company is profitable overall.

    In summary, the “optimal” EPS is relative and context-dependent, and a miss is typically defined by falling short of pre-determined expectations rather than a fixed numeric benchmark.

    me: if i have the eps for a company how do i get the P/E, what is the price i use?

    ChatGPT said:

    The Price/Earnings (P/E) ratio is calculated by dividing the current market price of a company’s stock by its earnings per share (EPS). So if you have the EPS, the price you use is typically the current share price—often the last traded or closing price.

    For example, if a company’s EPS is $2.00 and its current share price is $30.00, then the P/E ratio is calculated as: P/E = Current Share Price / EPS = 30.00 / 2.00 = 15

    There are a couple of variations depending on your analysis:

    • Trailing P/E: Uses the current share price and the EPS from the past 12 months.
    • Forward P/E: Uses the current share price and forecasted EPS.

    In most standard cases, you’d use the latest closing price from the market to compute the P/E ratio.


    me: how do you get the trailing p/e, do you use daily bars? and how do you get the forecasted EPS?

    ChatGPT said:

    Trailing P/E is calculated by dividing the current market price (usually the last closing price) by the earnings per share over the trailing twelve months (TTM EPS). Here’s how it typically works:

    • Trailing P/E:
      • Price: You take the most recent closing price of the stock. While daily bars provide the closing price, you don’t calculate the EPS from daily bars. Instead, you use the EPS figure from the company’s quarterly reports and annualize it (i.e., sum up the last four quarters of reported EPS).
      • EPS: The earnings data comes from the company’s financial reports. For example, if you have EPS for Q1, Q2, Q3, and Q4, you add those together to get the TTM EPS.
      • Calculation: Trailing P/E = Current Stock Price / TTM EPS
    • Forecasted (Forward) EPS:
      • Source: Forecasted EPS isn’t derived directly from historical price data or daily bars. Instead, it’s typically provided by analysts’ consensus estimates or company guidance. Financial data providers (like Bloomberg, FactSet, or Reuters) compile these forecasts.
      • Usage: You take the current stock price and divide it by the forecasted EPS (often for the next 12 months) to get the forward P/E ratio.

    In summary, daily bars are mainly used to get the current price (closing price) for the calculation. The earnings figure for trailing P/E comes from reported quarterly earnings, and forecasted EPS comes from analysts’ estimates or management’s guidance.

    Here is a development version; although rough in visual style as a prototype, it gives you a picture of how you can put this all together for algorithmic insights into the market.

  • Cognitive Warfare ChatGPT Deep Research

    I queried OpenAI’s ChatGPT to research Cognitive Warfare; here is what it came up with. Here is a link to the original query and response (you will need to scroll down the page a bit, past a coding question I asked it before this):
    AI ChatGPT generated report, https://chatgpt.com/share/67e082db-d5b8-8003-b2dd-2f4a9f1ac816

    Research completed in 17m · 53 sources

    Cognitive Warfare conceptually turns the human mind into a battlefield, targeting perceptions and decision-making as directly as traditional warfare targets physical assets. Cognitive warfare is an emergent form of conflict that uses information and psychology to “alter the cognition of human targets”, often without those targets’ awareness [innovationhub-act.org]. Unlike conventional kinetic warfare, which relies on physical force, cognitive warfare operates in the intangible realm of beliefs, emotions, and decisions. Its primary goals are to destabilize and to influence – in essence, to “get the enemy to destroy himself from within” by manipulating perceptions and behavior [innovationhub-act.org]. Through coordinated campaigns of propaganda, disinformation, social engineering, and even neurobiological means, cognitive warfare seeks to compel adversaries to act in the aggressor’s interest “without a single shot being fired.” This report provides a detailed overview of cognitive warfare, focusing on the role of automation and artificial intelligence (AI) in its conduct, operational applications in both military and civilian spheres, human–machine teaming and neurotechnological integration, and the attendant ethical, legal, and policy implications.

    Defining Cognitive Warfare and Key Frameworks

    Cognitive warfare has been described as “an unconventional form of warfare that uses digital information tools to alter enemy cognitive processes, exploit mental biases or reflexive thinking, and provoke thought distortions, influence decision-making and hinder actions” [innovationhub-act.org]. In other words, it deliberately targets how people perceive, think, and decide, treating perception as a weapon [moderndiplomacy.eu]. This concept overlaps with but goes beyond traditional information warfare, which focuses on controlling information content. Cognitive warfare targets the process of cognition itself – “what individual brains will do with [the] information” – making “a cognitive effect not a by-product of action, but its very objective” [innovationhub-act.org]. As Claverie and du Cluzel (2022) note, cognitive warfare “extends beyond the human consequences of cyber warfare”, integrating cyber means with psychological and social techniques to directly affect the human mind [innovationhub-act.org]. NATO’s Allied Command Transformation similarly defines cognitive warfare as activities to “affect attitudes and behaviors, by influencing, protecting, or disrupting…cognition, to gain an advantage over an adversary,” effectively making “human cognition…a critical realm of warfare” [act.nato.int].

    Key differences from related concepts: Cognitive warfare is related to psychological operations (psyops) and information operations, but it is broader in scope and ambition. Traditional psyops often deliver overt (white) or covert (black) propaganda to influence targets, typically in military contexts. In contrast, cognitive warfare leans heavily on “gray” tactics – ambiguous in origin and deniable – aimed at whole societies [innovationhub-act.org]. It does not rely on overt attribution or immediate tactical outcomes, but on subtle, cumulative effects on public opinion and decision-making. Unlike pure cyber warfare, which “only” targets computer systems, cognitive warfare targets the human element behind those systems, “utilizing similar tactics [as cyberattacks]…but spreading malevolent information” rather than malware [innovationhub-act.org]. In short, while cyber warfare disrupts infrastructure, cognitive warfare disrupts understanding [researchgate.net]. It also blurs the line between military and civilian targets, often encompassing “whole-of-society manipulation” in pursuit of strategic goals [act.nato.int]. For example, a cognitive attack might involve injecting false narratives into social media to erode public trust, as seen in Russian influence campaigns to “decay public trust towards open information sources” during the Ukraine conflict [act.nato.int].

    Frameworks for understanding cognitive warfare have been proposed to classify its goals, methods, and domain. One such framework is the UnCODE system (Ask et al., 2024), a neurocentric taxonomy of cognitive warfare objectives. UnCODE stands for Unplug, Corrupt, disOrganize, Diagnose, Enhance, representing five qualitatively distinct categories of adversarial goals [researchgate.net]. In this model, an attacker might aim to: (1) “Unplug”: eliminate the target’s ability to receive or generate information (for instance, silencing or isolating them) [researchgate.net]; (2) “Corrupt”: degrade the target’s information-processing capacity (e.g. through fatigue, confusion, or technical interference) [researchgate.net]; (3) “disOrganize”: bias or distort the target’s inputs and outputs, essentially introducing systematic errors or false perceptions [researchgate.net]; (4) “Diagnose”: monitor and understand the target’s cognitive state and patterns (a reconnaissance step to enable tailored manipulation) [researchgate.net]; or (5) “Enhance”: improve the target’s cognitive capabilities, perhaps to exploit them (for example, feeding someone empowering information or technology to guide their actions favorably) [researchgate.net]. Notably, the UnCODE framework is “species-agnostic” – it considers both human and non-human cognition as potential targets [researchgate.net]. This acknowledges that modern adversaries might also attempt to manipulate machine cognition (such as AI systems’ decision loops) alongside human minds. For instance, corrupting the data inputs of an opponent’s AI decision support system can bias its recommendations – a form of cognitive warfare against a non-human “cognitive” target.

    Another important conceptual tool is the analogy to the kill chain framework pioneered in cyber operations. In cybersecurity, Hutchins et al. (2010) introduced the “intrusion kill chain”, describing the stepwise phases of a cyber attack (reconnaissance, weaponization, delivery, exploitation, installation, command-and-control, and actions on objectives) [lockheedmartin.com]. The kill chain highlights that disrupting any step can break the attack. By extension, analysts have begun to consider a cognitive kill chain: the phases an influence operation or cognitive attack might progress through – for example, target analysis (reconnaissance of sociopsychological vulnerabilities), content creation (weaponization of narratives or deepfakes), dissemination (delivery via media channels or bots), penetration of audience mindshare (exploitation of attention and trust), consolidation (installation of false beliefs or confusion), command-and-control (sustaining engagement and steering the narrative), and ultimately behavioral or political effect (actions on objectives). Identifying these stages enables defenders to devise countermeasures at each phase, analogous to cyber defense [smallwarsjournal.com]. Indeed, “intelligence-driven” cognitive defense – anticipating adversary campaigns and preempting their narratives – is increasingly seen as crucial, just as threat intelligence is in cyber defense [smallwarsjournal.com].

    There is also debate around whether cognitive warfare constitutes a distinct “Cognitive Domain” of warfare. Traditionally, NATO and militaries recognize five domains: land, sea, air, space, and cyber. With the rise of cognitive operations, some strategists argue for formally acknowledging a sixth domain focused on the human brain and perception [innovationhub-act.org]. Proponents like Le Guyader (2022) suggest that the cognitive domain overlaps with all others but merits its own doctrine and focus [scholar.google.com]. However, others caution that carving out a separate human or cognitive domain may be conceptually flawed. Ask and Knox (2023), for instance, “take the perspective that a ‘human domain’ does not align with the trajectory of neuroscience and human evolution” in warfare [sto.nato.int]. Human cognition is an inherent factor in all domains, they argue, and cognitive warfare by nature permeates and transcends domain boundaries. Rather than a standalone domain, cognitive warfare is an integrative layer that exploits the linkages between physical actions, information, and human thought. This report will use the term “cognitive domain” as a useful shorthand, while recognizing the ongoing debate about its delineation.

    The Role of AI and Automation in Cognitive Warfare

    Modern cognitive warfare is deeply intertwined with automation and artificial intelligence. On the one hand, AI provides powerful new tools for conducting influence operations at scale; on the other, it introduces new targets (AI systems themselves) and challenges for cognitive security. Recent years have seen an explosion of AI-driven propaganda and deception techniques. Adversaries can leverage AI to generate highly persuasive fake content (text, images, video) and deploy botnets – automated accounts mimicking human users – to amplify disinformation, making it increasingly difficult for audiences to separate fact from fiction [moderndiplomacy.eu]. For example, in May 2023 an AI-generated image of a fake explosion at the Pentagon went viral on social media; it was convincing enough to briefly cause a dip in the U.S. stock market before authorities debunked it [mwi.westpoint.edu]. This incident starkly demonstrated the “catastrophic potential of AI-driven propaganda to destabilize critical systems” [mwi.westpoint.edu].

    State actors are actively developing AI-enhanced cognitive warfare capabilities. Russia has incorporated AI into its disinformation “troll farms,” using generative language models to produce more “human-like and persuasive content” for influence campaigns [mwi.westpoint.edu]. In the lead-up to elections, Russian operatives have employed AI to shape social media narratives, aiming to “sway U.S. electoral outcomes, undermine public confidence, and sow discord” – essentially weaponizing AI to magnify the reach and precision of information warfare [mwi.westpoint.edu]. China has likewise made AI a centerpiece of its cognitive warfare strategy. Chinese doctrine explicitly refers to “cognitive domain operations”, combining AI with psychological and cyber warfare to achieve strategic effects [mwi.westpoint.edu]. By “leveraging AI to create deepfakes, automate social media bots, and tailor disinformation to specific audiences,” China has “enhanced its capacity to manipulate public discourse” on a large scale [mwi.westpoint.edu]. One observed outcome is the proliferation of highly realistic fake personas and videos that push pro-China narratives or sow confusion in target countries. These AI-enabled operations are not limited to wartime scenarios; they are continuously underway in the so-called gray zone, eroding adversaries’ societies from within.

    Beyond content generation, AI and big-data analytics empower cognitive warfare through micro-targeting and personalization. Algorithms can sift vast datasets (social media profiles, search histories, demographic information) to identify individuals’ beliefs, biases, and emotional triggers. This enables “precise targeting of individuals” with tailored influence – for instance, delivering customized propaganda or conspiracy theories to those most susceptible [moderndiplomacy.eu]. During the COVID-19 pandemic, we saw how automated social media manipulation could amplify anti-vaccine misinformation by targeting communities with specific fears. In military contexts, an AI might analyze soldiers’ social media to detect low morale units and then push demoralizing narratives or deepfake orders from their commanders. The integration of AI thus supercharges the classic techniques of propaganda and psyops, making them more adaptive, scalable, and insidious. As one commentator put it, “AI-driven information warfare weapons” are ushering in a “new revolution in military affairs,” with the potential to “manipulate a target’s mental functioning in a wide variety of manners” unless robust defenses are in place [smallwarsjournal.com].

    AI is not only an offensive tool in cognitive warfare – it is also a target and a battlefield. Modern societies increasingly rely on algorithmic decision-makers (from financial trading bots to military decision aids); these constitute “nonhuman cognition” that adversaries can attempt to deceive or corrupt [researchgate.net]. For example, a rival might feed false data to an AI-based surveillance system so that it misidentifies threats (the equivalent of optical illusions for machines, known as adversarial examples). Or, in an information environment dominated by recommendation algorithms (on platforms like Facebook or YouTube), manipulating those algorithms’ inputs and parameters can effectively “hack” the attention and beliefs of millions. Indeed, a form of AI-on-AI cognitive warfare is conceivable, in which one side’s algorithms battle the other’s for control of the narrative – all faster than humans can follow. As AI “social bots” interact with AI recommendation systems, the information ecosystem can become an autonomous battleground of memetic and narrative contest, with humans as the prize. Researchers have warned that as “AI matures, it will magnify adversarial threat capabilities that maximize the creation of social chaos,” potentially eroding the trust that underpins democratic societies [smallwarsjournal.com].

    Defending against AI-enhanced cognitive warfare will likely require AI as well. Detection algorithms are being developed to spot deepfakes, bot networks, and coordinated disinformation campaigns in real time, flagging them before they spread widely. Machine learning can also help identify emerging “narrative attacks” by monitoring online discourse for sudden shifts, injected talking points, or inauthentic patterns [blackbird.ai]. Ultimately, a sort of autonomous cognitive security may be needed, where AI systems continuously patrol information channels for threats to the public’s mindset – analogous to anti-malware software but for disinformation. However, this raises hard questions (addressed later in this report) about surveillance, free expression, and who controls the filters on information. What is clear is that AI has become a double-edged sword in the cognitive domain: it vastly amplifies both the means of attack and the means of defense. The net impact on the balance of cognitive power between attackers and defenders remains to be seen, but the early indicators – from election interference to viral hoaxes – suggest that open societies face a significant new “cognitive security” challenge in the AI era.

    Operational Applications: Military and Hybrid Contexts

    Cognitive warfare strategies are being applied across a spectrum of scenarios, from battlefield operations to geopolitical influence campaigns that blur the line between war and peace. In traditional military settings, cognitive warfare techniques are used to undermine enemy morale, decision-making, and cohesion as a force multiplier alongside kinetic actions. For instance, militaries might deploy precision propaganda to convince enemy soldiers that their cause is futile or their leaders corrupt, prompting surrender or desertion. During the 2003 invasion of Iraq, U.S. psychological operations famously broadcast messages to Iraqi troops encouraging them to lay down arms; today, similar efforts could be enhanced with deepfake videos appearing to show Iraqi commanders already capitulating. Militaries are also integrating cognitive effects into operational planning – NATO’s Supreme Allied Commander Transformation has studied whether “the human brain is now the ultimate battlefield” and how commanders can incorporate cognitive objectives (like sowing confusion in enemy ranks) into campaign design [innovationhub-act.org]. Offensive cyber and electronic warfare units increasingly coordinate with information operations units: a cyberattack might take down communications (a physical effect), while simultaneously a flood of fake messages on enemy networks creates panic and false orders (a cognitive effect).

    Perhaps the clearest military application is seen in Russia’s and China’s doctrines, which explicitly embrace cognitive warfare. Russian hybrid warfare in Ukraine combined cyberattacks on infrastructure with relentless disinformation aimed at Ukrainian and international audiences – seeking both to fracture Ukraine’s will to fight and to influence global public opinion to reduce support for Ukraine [act.nato.int]. Russia targeted Ukrainian soldiers with text messages telling them to surrender, and spread false narratives (e.g. staging incidents to accuse Ukraine of atrocities) to sway minds. China’s concept of “Three Warfares” (psychological warfare, public opinion warfare, and legal warfare) similarly emphasizes controlling the narrative and legal justification surrounding a conflict to “achieve victory” before shots are fired [moderndiplomacy.eu; act.nato.int]. In a Taiwan contingency, for example, China might launch cyber and cognitive operations months in advance: using social media sockpuppets to stir doubt about U.S. commitments among the Taiwanese populace, deploying deepfake videos of Taiwanese leaders to undermine their credibility, and flooding regional information channels with legal arguments claiming China’s right to act. The goal would be to isolate Taiwan psychologically and politically, “shaping the perceptions of reality” so that resistance seems hopeless [moderndiplomacy.eu]. In all these cases, AI automation enables these campaigns to run continuously and adaptively, engaging millions of targets with tailored messages.

    Beyond overt conflict, cognitive warfare is now a fixture of gray-zone competition and hybrid threats in the civilian sphere. State and non-state actors use these techniques to achieve strategic aims without triggering a formal war, by attacking the cohesion, trust, and decision-making of societies. One illustrative scenario: an adversary spreads AI-generated rumors of an impending bank collapse in a rival nation, complete with forged “expert analyses” and fake news reports. Within weeks, this psychological operation could spark bank runs and financial turmoil – “undermining the public’s trust in institutions” and accomplishing economic damage that a bombing campaign might achieve, but clandestinely [moderndiplomacy.eu]. Real examples abound. Election meddling is a prominent one: from the 2016 US elections onward, foreign influence campaigns have used bots and false personas on social media to polarize electorates, promote extremist views, and erode trust in the electoral process [smallwarsjournal.com; mwi.westpoint.edu]. Disinformation in public health (such as the anti-vaccine movement) has been amplified by malicious actors to weaken adversary populations from within. Extremist groups like ISIS have also engaged in a form of cognitive warfare via online recruitment propaganda, using slickly produced videos and social media outreach to radicalize individuals globally. These “narrative attacks” by terrorists aim to inspire “lone wolf” attackers or build support networks – effectively weaponizing ideology through digital channels.

    A key characteristic of operational cognitive warfare is that it often targets civilian populations and social fault lines, exploiting existing divisions. Adversaries identify polarizing issues (race, religion, political identity) in the target society and then inject tailored disinformation to inflame tensions. The objective is to “accelerate pre-existing divisions…to pit different groups against each other and increase polarization” [innovationhub-act.org]. This was observed in the Russian Internet Research Agency’s operations, which in the same timeframe ran Facebook groups for both sides of contentious issues in the U.S., from police brutality to immigration, in order to exacerbate conflicts. In democratic nations, where free flow of information is a value, this openness is turned into a vulnerability – an “ungoverned Wild West” in cyberspace that authoritarian rivals exploit [smallwarsjournal.com]. Democracies are particularly vulnerable to cognitive warfare because their very strength (open discourse) can be used against them to create chaos and doubt [smallwarsjournal.com]. By contrast, authoritarian regimes insulate their populations (e.g. China’s Great Firewall), making them harder to influence externally [smallwarsjournal.com]. This asymmetry has led NATO and Western officials to call for strengthening societal “cognitive security” and resilience as a matter of national security [smallwarsjournal.com]. For example, Finland, which faces constant information attacks from Russia, has incorporated media literacy and critical thinking training into its school curricula, an approach credited with inoculating its citizens against propaganda.

    In sum, operational cognitive warfare spans a continuum: on one end, it is integrated with military operations to break the enemy’s will to fight and distort their decision loop (e.g. causing commanders to make mistakes based on false info). On the other end, it is a day-to-day strategic competition in the information environment – a constant “battle for hearts and minds” in which state and non-state actors attempt to steer the narratives and beliefs of target populations for strategic gain. As Claverie and du Cluzel put it, “Cognitive aggression is boundless. It can have a variety of objectives and will adapt itself to other strategies being used,” whether territorial conquest, influencing elections, or disrupting social order [innovationhub-act.org]. The next sections explore how emerging technologies and human-machine partnerships are augmenting these cognitive operations, and what ethical/policy issues arise as a result.

    Human–Machine Teaming and Neurotechnological Integration

    Because cognitive warfare ultimately centers on the human brain, an irony is that technology is both the weapon and the shield, but humans remain the most critical element. Human–machine teaming in cognitive warfare refers to the collaboration of human operators and AI/automation to enhance cognitive operations. This occurs in both offensive and defensive contexts. On offense, propagandists and psychological operators increasingly rely on AI systems to handle the “heavy lifting” of influence campaigns – data analysis, target selection, message personalization, and even automated content creation – while humans provide strategic guidance and ethical oversight. On defense, human analysts partner with AI tools to detect and counter adversary influence. For example, an intelligence analyst might use an AI platform to sift millions of social media posts for disinformation patterns, then use human judgment to craft counter-narratives or truth campaigns. Effective human–AI coordination can dramatically improve the speed and scale at which cognitive operations are conducted or countered. As one NATO report notes, “CogWar takes well-known methods within warfare to a new level by attempting to alter and shape the way humans think, react, and make decisions.” [researchgate.net] To manage this new level, human operators must leverage AI’s data-handling capabilities without losing the uniquely human insight into psychology and culture.

    A holistic approach to human–machine teaming is necessary, as highlighted by Flemisch (2023). He argues that cognitive warfare should be seen as a socio-technical system in which human cognition and machine cognition interact dynamically [sto.nato.int]. Flemisch introduced a “holistic bowtie model” of cognitive warfare that maps out how technological agents (AI, algorithms) and human agents (individuals, decision-makers) are interwoven in both attacking and defending cognitive targets [researchgate.net]. In this model, technology is not just a tool but an active participant in the cognitive battle – for instance, algorithmic content amplifiers on social media can be thought of as force multipliers on the battlefield of perception. The bowtie metaphor (often used in risk management) implies a structure where threats on one side and consequences on the other are linked by a central event or process; applied here, it suggests that by strengthening human–machine interfaces and trust, one can narrow the “choke point” through which cognitive attacks must pass, thus mitigating their impact. In practice, this could mean designing information systems such that humans remain in meaningful control – e.g. AI flags a suspected disinformation post, but a human moderator decides to remove or label it. If done right, human–machine teams can capitalize on AI’s speed and breadth and human intuition and ethical judgment. If done poorly, there’s a risk of automation bias (over-reliance on AI outputs) or conversely, information overload for human operators.

    One emerging area of human–machine teaming is the use of brain–computer interfaces (BCIs) and other neurotechnologies to integrate humans and machines more directly. Advances funded by DARPA and others aim to enable “seamless neural links” whereby soldiers and AI systems could exchange information by direct neural signals [rand.org]. The goal is to accelerate the Observe–Orient–Decide–Act (OODA) loop in warfare by bypassing slower channels like verbal commands or screen displays [rand.org]. For example, an intelligence analyst wearing a noninvasive BCI might receive an AI’s threat alert as a mental sensation or visual overlay straight into the brain, shortening reaction time. DARPA’s N3 program (Next-Generation Nonsurgical Neurotechnology) explicitly cites the potential of BCIs to “facilitate multitasking at the speed of thought” and to “interface with smart decision aids” in combat [rand.org]. In essence, the human brain could be gradually augmented by AI – not just with traditional decision support, but with real-time neural input/output. This promises significant advantages in cognitive warfare: a soldier could be resistant to information overload because the AI filters and feeds only what’s crucial directly to cognition [rand.org]. It might also allow multiple robotic systems to be controlled simultaneously by one person’s thoughts, as experiments in “swarm control via neural signals” suggest [rand.org]. However, this deep integration also creates novel vulnerabilities – a “hacked” BCI could literally inject thoughts or alter perception in the user, which is a security nightmare scenario.

    Beyond BCIs, neurotechnology integration includes things like neurostimulators, wearables monitoring stress or attention, and neurochemical enhancements. Militaries are investing in understanding and boosting the cognitive performance of their personnel: e.g. wearable EEG devices to continuously assess a pilot’s cognitive workload, or transcranial electrical stimulation to keep special forces alert for longer. China reportedly developed an “Intelligent Psychological Monitoring System” – sensor bracelets that track soldiers’ emotional states and fatigue, alerting commanders if combat troops are losing morale [act.nato.int]. This kind of tech can be double-edged: it helps maintain one’s own forces’ cognitive readiness, but in the hands of an adversary it could be used to identify when enemy forces are psychologically vulnerable (or even to manipulate them if they somehow intercept or hijack such data). The weaponization of neuroscience is an area of increasing concern. DiEuliis and Giordano (2017) argue that gene-editing tools like CRISPR could be “game-changers for neuroweapons” – for instance, by engineering viruses or toxins that selectively attack neural functions [pmc.ncbi.nlm.nih.gov]. They posit a “novel—and realizable—path to creating potential neuroweapons” via genetically modified neurotoxins or psychoactive agents that could impair cognition or induce psychological states in targets [pmc.ncbi.nlm.nih.gov]. Such neuroweapons blur the boundary between biological and cognitive warfare: a pathogen that induces paranoia or lethargy in a population would effectively serve a cognitive warfare goal through biological means.

    The integration of neurotechnology also extends to cognitive enhancement of friendly forces. Modafinil and other nootropics have been used to keep fighter pilots and troops mentally sharp on long missions. Future enhancements could include memory boosters or stress inoculators that improve decision-making under pressure. In the UnCODE framework, this would be seen as the “Enhance” category – potentially enhancing a target’s cognition not to help them, but to steer them (for example, boosting a local leader’s cognitive capacity with better information so that they become overconfident and take bold actions beneficial to the adversary’s plan) [researchgate.net]. While that example is speculative, it highlights that manipulating cognition can involve adding as well as subtracting capabilities.

    Finally, some of the more speculative (and controversial) ideas in human-machine cognitive teaming come from the fringes of defense research. “Remote neural influencing” via directed energy (e.g. high-power microwaves affecting brain activity) has been explored as a way to disrupt an enemy’s cognitive functions at a distance [trinedaythejourneypodcast.buzzsprout.com]. Reports of so-called “microwave weapons” causing disorientation (as in the Havana embassy incidents) have raised questions about whether states are already employing such methods to literally impair brain function and induce cognitive confusion. “Reflexive control”, a concept from Soviet-era doctrine, is essentially cognitive warfare by causing an adversary to make decisions against their own interest by shaping their perceptions – today AI could aid in executing reflexive control by micro-targeting decision-makers with exactly the stimuli that will evoke the desired (but harmful) response [trinedaythejourneypodcast.buzzsprout.com]. Author M. McCarron (2024) suggests that advanced military powers are even investigating “thought injection” – attempts to insert ideas or impulses into a person’s mind, potentially via subliminal cues or neural interfaces, with AI systems orchestrating these efforts in a tailored way [trinedaythejourneypodcast.buzzsprout.com]. While some of these claims verge on science fiction or conspiracy, they underscore a core point: as technology penetrates deeper into the human cognitive domain, the distinction between influencing a mind and physically assaulting it begins to blur. This raises profound ethical and legal challenges, which we turn to next.

    Ethical, Legal, and Policy Implications

    The rise of cognitive warfare – especially turbocharged by AI and neurotechnology – presents a host of ethical and legal dilemmas. Traditional laws of war (e.g. the Geneva Conventions) and norms of conflict were not designed with “attacks on the mind” in mind. One fundamental issue is the targeting of civilians. Cognitive warfare campaigns almost invariably target civilian populations (either broadly or in specific segments) because altering the adversary’s society and political environment is often the objective. This clashes with the principle of distinction in international humanitarian law, which prohibits direct attacks on civilians. Propaganda and psychological operations have long been a gray area; they are generally legal in peacetime and wartime up to a point, but where is the line between permissible information influence and an illegal attack causing harm? For example, deliberately spreading disinformation that causes panic (as in the banking panic scenario) could be viewed as an attack on civilian well-being. Yet, because no kinetic force is used, it falls into a legal void. There is an argument that cognitive attacks that inflict significant suffering or harm (e.g. inciting violence, or inducing mental illness or self-harm on a population) might violate the spirit of the laws of war or human rights law. However, enforcement is exceedingly difficult – proving causation and intent in the psychological realm is much harder than for a dropped bomb.

    Another ethical concern is the manipulation of truth and free will. Cognitive warfare by nature involves deception, propaganda, and psychological manipulation. Democracies face a moral quandary: to defend themselves, do they adopt similar tactics (fighting fire with fire in the info sphere) at the cost of eroding the very values of truth and transparency they uphold? For instance, should a democratic government ever use deepfakes for benign psyops against extremist groups? Most liberal societies would currently say no, as it violates norms of honesty and could backfire by undermining public trust if revealed. Moreover, if AI systems start engaging in “automated subversion” – acting without direct human orders to spread disinformation – accountability becomes murky. Can a nation be held responsible for an autonomous AI agent that runs amok in the information environment? These questions echo the broader AI ethics debate, now applied to warfare: ensuring meaningful human control, responsibility, and compliance with intent. As of now, “there are no established ethical considerations and doctrines” fully governing cognitive warfare [innovationhub-act.org]. NATO researchers point out that this field expanded so rapidly with digital tech that policy has lagged behind [innovationhub-act.org]. The ethical framework is essentially playing catch-up to real-world tactics already in use.

    The use of neurotechnological and biomedical tools for cognitive purposes raises additional legal questions. Would deploying a CRISPR-engineered “emotion virus” that makes people apathetic violate the Biological Weapons Convention? Quite possibly – it likely qualifies as a biological agent – but what if it’s an incapacitant that only affects cognition (akin to a psychotropic drug weapon)? The Chemical Weapons Convention does ban chemical incapacitating agents, and arguably a bioweapon causing cognitive damage would be covered. However, what about non-chemical means like directed energy that cause no visible injury but induce, say, temporary memory loss or panic? There is no explicit treaty on directed-energy neuroweapons. Scholars like Giordano call for updating arms control categorizations to account for such novel neuroweapons and to establish oversight on dual-use neuroscience research [pmc.ncbi.nlm.nih.gov]. Already, some nations have included gene editing in WMD threat lists due to its potential misuse for such purposes [pmc.ncbi.nlm.nih.gov].

    Privacy and human rights are also at stake. Cognitive warfare techniques often involve mass data collection about target populations (to personalize messages) and surveillance of online behavior. This can conflict with privacy rights. Moreover, if states ramp up their cognitive security, they may implement more aggressive monitoring of their own information space – straying into censorship. Free expression becomes a casualty if every contentious view is seen as a possible foreign influence to be quashed. Democracies must balance resilience to manipulation with preservation of open discourse. On the defensive side, one policy debate is how to educate and inoculate the public against cognitive manipulation. Programs in digital literacy, critical thinking, and even “mind fitness” (being aware of cognitive biases) are being considered. Ethically, such programs are positive if done transparently, but one could imagine governments attempting to “immunize” the public by quietly feeding them counter-propaganda – which starts to resemble the manipulations we seek to fight.

    Internationally, there is no clear consensus or legal regime specifically for cognitive warfare. Acts like election interference via disinformation arguably violate principles of non-intervention in sovereign affairs, but attribution and response are diplomatically fraught. Some experts suggest new agreements or norms are needed – for instance, perhaps states could agree not to target each other’s health sector with disinformation (given the COVID experience), or not to use deepfakes of each other’s leaders (to avoid inadvertent escalation). However, enforcement of such norms would be challenging. It is also worth noting that cognitive warfare can prevent or reduce violence in some cases – for example, if done to “deter intervention” or “win without fighting” by convincing an adversary’s population to oppose war [moderndiplomacy.eu]. This resonates with Sun Tzu’s ideal of winning through influencing enemy will rather than slaughter. Ethically, one might argue that if cognitive means can achieve outcomes with less bloodshed, they could be seen as a more humane form of conflict (provided they don’t cross into atrocity like inciting genocide, which is clearly illegal). The counter-argument is that manipulating minds can be deeply injurious to personal autonomy and societal harmony, thus a different but still serious harm.

    On the policy front, NATO and allied countries are now actively grappling with cognitive warfare. NATO has stood up initiatives like the Cognitive Warfare Exploratory Concept and held high-level symposia on “Mitigating and Responding to Cognitive Warfare” [researchgate.net]. The alliance recognizes that it must develop defensive measures, which include public awareness campaigns, strengthening of democratic institutions against subversion, training military personnel in cognitive security, and improving intelligence-sharing on influence operations. For example, a NATO technical report recommends developing a comprehensive training curriculum to “increase awareness regarding the impact of psychological deception” and to inculcate resilience at all levels of military and political leadership [publications.tno.nl]. Indeed, studies found even information operations specialists were not significantly more savvy about manipulation than average civilians [publications.tno.nl] – highlighting a need for education. Governments are also working with social media companies and AI firms to detect and disrupt foreign disinformation networks (though this raises questions about state influence over tech platforms).

    In terms of doctrine, armed forces are updating their concepts to incorporate the cognitive dimension. The U.S. DoD, for instance, in 2020 included “information” as the seventh joint function (alongside maneuver, firepower, etc.), reflecting the recognition that controlling information and perception is part of modern operations. Some have suggested establishing dedicated “cognitive warfare units” or commands that unify cyber, psyops, electronic warfare, and intelligence to fight on this front as a coordinated whole. Legal advisors are being forced to consider at what point a sustained disinformation campaign constitutes an “armed attack” or use of force under international law, potentially warranting self-defense. These are uncharted waters: if, hypothetically, an AI-driven cognitive attack caused massive civil unrest in a country, could that country lawfully respond with conventional force against the perpetrator? The answer is unclear, demonstrating the gap between strategic reality and legal frameworks.

    In conclusion, the advent of AI-empowered cognitive warfare is challenging existing ethical norms and legal regimes. It urges a rethinking of security to include cognitive security as equally important as territorial security. Policymakers will need to delineate red lines (e.g. outlawing certain deceptive tactics, at least by policy if not by treaty) and invest in societal resilience. Ethically, democratic societies must strive to defend against cognitive attacks without undermining the open values that make them worth defending in the first place – a delicate balance requiring transparency, public engagement, and education rather than purely clandestine counter-operations. The next decade will likely see rapid evolution in both the methods of cognitive warfare and the frameworks to control it.

    Conclusion

    Cognitive warfare represents a paradigm shift in conflict, one that “uses the human mind as its battlefield” and leverages any and all means – informational, technical, psychological, and biological – to achieve its aims [blackbird.ai]. Automation and AI have amplified this mode of warfare, enabling influence operations of unprecedented scale and sophistication. The integration of humans and machines, from AI-run bot armies to potential brain-computer interfaces, is blurring the line between the organic and the technological in warfare. As we have seen, military planners are increasingly incorporating cognitive effects into their operations, and adversaries are already exploiting the seams of open societies with automated disinformation and strategic propaganda.

    Confronting these threats will require an equally innovative and integrated response. Frameworks like UnCODE and the cognitive kill chain help in conceptualizing and anticipating attack paths, while human-machine teaming and AI tools will be indispensable in mounting a defense. However, technology is not a silver bullet – indeed, an over-reliance on AI without human context can be perilous in a domain as nuanced as human cognition. Thus, investing in human capital – training leaders, soldiers, and citizens to recognize and withstand cognitive manipulation – is paramount. Building a culture of resilience and media literacy is as crucial as deploying the latest algorithm to filter fake news. On the international stage, norms and perhaps new laws must evolve to govern cognitive conflict, lest it become a wild frontier where anything goes.

    In a sense, cognitive warfare is a return to an ancient truth: war has always been, at its heart, a battle of wills and minds, as much as a clash of arms. What has changed is the medium and reach – digital networks and AI have made it possible to assault the mind with precision and scale never before seen. The challenge for the global community is to navigate this landscape responsibly: to harness the positive potential of AI and neuroscience for strengthening human decision-making and security, while guarding against their malicious use. As cognitive warfare “operates on a global stage” and is “essentially invisible” until its effects are felt [innovationhub-act.org], proactive and preemptive measures will be vital. In summary, cognitive warfare demands a holistic approach to defense – one that marries cutting-edge technology with deep understanding of human nature, upholds ethical standards, and reinforces the very cognitive strengths (critical thinking, informed citizenry, unity of purpose) that adversaries seek to erode [innovationhub-act.org].

    Sources:

    Ask, T. F., et al. (2024). The UnCODE System: A Neurocentric Systems Approach for Classifying the Goals and Methods of Cognitive Warfare. NATO HFM-361 Symposium. [researchgate.net]

    Ask, T. F., & Knox, B. J. (2023). Cognitive Warfare and the Human Domain: … Human Domain. In Mitigating and Responding to Cognitive Warfare (NATO STO-TR-HFM-ET-356). [sto.nato.int]

    Claverie, B., & du Cluzel, F. (2022). “Cognitive Warfare”: The Advent of the Concept of “Cognitics” in Warfare. In Cognitive Warfare: The Future of Cognitive Dominance, 2:1–8. [innovationhub-act.org]

    du Cluzel, F. (2020). Cognitive Warfare. NATO Innovation Hub. [innovationhub-act.org]

    Flemisch, F. (2023). Human-Machine Teaming Towards a Holistic Understanding of Cognitive Warfare. In Mitigating and Responding to Cognitive Warfare (NATO STO-TR-HFM-ET-356). [researchgate.net]

    Hutchins, E., Cloppert, M., & Amin, R. (2010). Intelligence-Driven CND Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains. [lockheedmartin.com]

    Le Guyader, H. (2022). Cognitive Domain: A Sixth Domain of Operations? In Cognitive Warfare: Future of Cognitive Dominance, 3:1–6. [innovationhub-act.org]

    Masakowski, Y. R., & Blatny, J. M. (Eds.) (2023). Mitigating and Responding to Cognitive Warfare (NATO STO-TR-HFM-ET-356).

    DiEuliis, D., & Giordano, J. (2017). Why Gene Editors Like CRISPR/Cas May Be a Game-Changer for Neuroweapons. Health Security, 15(3). [pmc.ncbi.nlm.nih.gov]

    McCarron, M. (2024). Battlespace of Mind: AI, Cybernetics and Information Warfare. [trinedaythejourneypodcast.buzzsprout.com]

    Additionally cited: NATO ACT (2022), Cognitive Warfare Concept [act.nato.int]; Bond, S. (2023), NPR report on fake Pentagon image [npr.org]; Coombs, A. (2024), MWI report on AI in info ops [mwi.westpoint.edu]; etc.

  • Mean Reversion and Stationarity in Statistics

    In statistics, regression toward the mean (also called regression to the mean, reversion to the mean, and reversion to mediocrity) is the phenomenon where, if one sample of a random variable is extreme, the next sampling of the same random variable is likely to be closer to its mean. Furthermore, when many random variables are sampled and the most extreme results are intentionally picked out, it refers to the fact that (in many cases) a second sampling of these picked-out variables will result in “less extreme” results, closer to the initial mean of all of the variables.

    Mathematically, the strength of this “regression” effect is dependent on whether or not all of the random variables are drawn from the same distribution, or if there are genuine differences in the underlying distributions for each random variable. In the first case, the “regression” effect is statistically likely to occur, but in the second case, it may occur less strongly or not at all.

    Regression toward the mean is thus a useful concept to consider when designing any scientific experiment, data analysis, or test, which intentionally selects the most extreme events – it indicates that follow-up checks may be useful in order to avoid jumping to false conclusions about these events; they may be genuine extreme events, a completely meaningless selection due to statistical noise, or a mix of the two cases.

    source: https://en.wikipedia.org/wiki/Regression_toward_the_mean
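    As a quick illustration (a minimal simulation sketch, separate from the trading code later in this post): draw two independent measurements for many subjects, pick out the subjects whose first measurement was extreme, and look at their second measurement.

    import numpy as np
    
    rng = np.random.default_rng(0)
    first = rng.normal(0, 1, 100_000)    # first measurement for 100,000 subjects
    second = rng.normal(0, 1, 100_000)   # independent second measurement
    
    extreme = first > 2.0                # subjects that looked extreme on the first draw
    print("mean of first draw for the extreme subjects :", round(first[extreme].mean(), 2))   # roughly 2.4
    print("mean of second draw for the same subjects   :", round(second[extreme].mean(), 2))  # roughly 0.0

    When the picked-out subjects also differ genuinely (not just by noise), the second draw regresses only part of the way back toward the mean, as noted above.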

    How to test for mean reversion:

    Mathematically, a continuous mean-reverting time series can be represented by an Ornstein-Uhlenbeck stochastic differential equation of the following form:

    dx_t = θ(μ − x_t) dt + σ dW_t

    where θ is the rate of reversion to the mean, μ is the mean value of the process, σ is the volatility of the process and, finally, W_t is a Wiener process.
    The equation implies that the change of the time series in the next period is proportional to the difference between the mean and the current value, with the addition of Gaussian noise.

    source: https://medium.com/bluekiri/simple-stationarity-tests-on-time-series-ad227e2e6d48
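    To make the dynamics concrete, here is a minimal Euler-Maruyama simulation of the equation above; the parameter values are arbitrary and purely illustrative.

    import numpy as np
    
    def simulate_ou(theta=0.5, mu=100.0, sigma=2.0, x0=90.0, n=500, dt=1.0, seed=42):
        """Euler-Maruyama discretization of dx_t = theta*(mu - x_t)*dt + sigma*dW_t."""
        rng = np.random.default_rng(seed)
        x = np.empty(n)
        x[0] = x0
        for t in range(1, n):
            dw = rng.normal(0.0, np.sqrt(dt))                        # Wiener increment
            x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt + sigma * dw
        return x
    
    series = simulate_ou()
    print(series[:5])   # the path pulls from x0 = 90 toward the long-run mean of 100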

    We can see the mean of a mean-reverting series as the line of linear regression through the series (plot not reproduced here).

    A key concept in testing for mean reversion is that of stationarity:

    In mathematics and statistics, a stationary process (also called a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose statistical properties, such as mean and variance, do not change over time. More formally, the joint probability distribution of the process remains the same when shifted in time. This implies that the process is statistically consistent across different time periods. Because many statistical procedures in time series analysis assume stationarity, non-stationary data are frequently transformed to achieve stationarity before analysis.

    source: https://en.wikipedia.org/wiki/Stationary_process

    (source: https://www.youtube.com/watch?v=I3NjeRXIs5k, accessed 10/3/25)
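    A common transformation to achieve stationarity is differencing: a random walk is non-stationary, but its first differences are. Here is a minimal sketch using numpy only (the ADF code further below can confirm this more formally):

    import numpy as np
    
    rng = np.random.default_rng(0)
    steps = rng.normal(0, 1, 5000)
    random_walk = np.cumsum(steps)        # non-stationary: the level wanders over time
    differenced = np.diff(random_walk)    # first differences recover the (stationary) steps
    
    # the half-sample means of the walk typically differ a lot, while those of the
    # differenced series stay near zero -- a crude symptom of (non-)stationarity
    for name, s in [("random walk", random_walk), ("differenced", differenced)]:
        half = len(s) // 2
        print(name, round(s[:half].mean(), 3), round(s[half:].mean(), 3))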

    The Augmented Dickey-Fuller (ADF) test provides a quick check and confirmatory evidence that your time series is stationary or non-stationary. It is based on the simple observation that, in a mean-reverting series, if the current value is above the mean the next move is likely to be downward, while if it is below the mean the next move is likely to be upward.

    In the Python code below we will simply interpret the result using the p-value from the test. The null hypothesis of the ADF test is that the series has a unit root (i.e. is non-stationary). A p-value below a chosen threshold (we will use 5%) means we reject the null hypothesis and treat the series as stationary, while a p-value above the threshold means we fail to reject the null and treat the series as non-stationary.

    import numpy as np
    from statsmodels.regression.linear_model import OLS
    from statsmodels.tsa.tsatools import lagmat, add_trend
    from statsmodels.tsa.adfvalues import mackinnonp
    
    def adf(ts):
        """
        Augmented Dickey-Fuller unit root test
        """
        # make sure we are working with an array, convert if necessary
        ts = np.asarray(ts)
        
        # Get the dimension of the array
        nobs = ts.shape[0]
        
        # We use 1 as maximum lag in our calculations
        maxlag = 1
        
        # Calculate the discrete difference
        tsdiff = np.diff(ts)
        
        # Create a 2d array of lags, trim invalid observations on both sides
        tsdall = lagmat(tsdiff[:, None], maxlag, trim='both', original='in')
        # Get dimension of the array
        nobs = tsdall.shape[0] 
        
        # replace 0 xdiff with level of x
        tsdall[:, 0] = ts[-nobs - 1:-1]  
        tsdshort = tsdiff[-nobs:]
        
        # Calculate the linear regression using an ordinary least squares model    
        results = OLS(tsdshort, add_trend(tsdall[:, :maxlag + 1], 'c')).fit()
        adfstat = results.tvalues[0]
        
        # Get approx p-value from a precomputed table (from stattools)
        pvalue = mackinnonp(adfstat, 'c', N=1)
        return pvalue

    source: https://medium.com/bluekiri/simple-stationarity-tests-on-time-series-ad227e2e6d48

    This code can also be validated against the adfuller function included in the Python module statsmodels.
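    For example, assuming ts is the same one-dimensional array passed to adf() above, a quick cross-check against statsmodels might look like this (maxlag=1, regression='c' and autolag=None mirror the hand-rolled settings):

    from statsmodels.tsa.stattools import adfuller
    
    # adfuller returns the test statistic and p-value (among other values)
    result = adfuller(ts, maxlag=1, regression='c', autolag=None)
    adf_stat, pvalue = result[0], result[1]
    print("ADF statistic:", round(adf_stat, 4), " p-value:", round(pvalue, 4))
    print("stationary at the 5% level" if pvalue < 0.05 else "cannot reject non-stationarity")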

    One can also test for stationarity using the Hurst exponent. This measures the speed of diffusion of the series, which for a mean-reverting series should be slower than for a geometric random walk. The speed of diffusion is measured by its variance.

    We can estimate the Hurst exponent with the following code from Corrius (2018):

    def hurst(ts):
        """
        Returns the Hurst Exponent of the time series vector ts
        """
        # make sure we are working with an array, convert if necessary
        ts = np.asarray(ts)
    
        # Helper variables used during calculations
        lagvec = []
        tau = []
        # Create the range of lag values
        lags = range(2, 100)
    
        #  Step through the different lags
        for lag in lags:
            #  produce value difference with lag
            pdiff = np.subtract(ts[lag:],ts[:-lag])
            #  Write the different lags into a vector
            lagvec.append(lag)
            #  Calculate the variance of the difference vector
            tau.append(np.sqrt(np.std(pdiff)))
    
        #  linear fit to double-log graph
        m = np.polyfit(np.log10(np.asarray(lagvec)),
                       np.log10(np.asarray(tau).clip(min=0.0000000001)),
                       1)
        # return the calculated hurst exponent
        return m[0]*2.0

    source: https://medium.com/bluekiri/simple-stationarity-tests-on-time-series-ad227e2e6d48

    We interpret the results by the following rules:

    H = 0.5 indicates a geometric random walk; for a mean-reverting series H < 0.5; and for a trending series H > 0.5. H is also an indicator of the degree of mean reversion or trendiness: as H decreases towards 0 the series is more strongly mean reverting, and as it increases towards 1 it is more strongly trending.
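    As a quick, purely illustrative sanity check (assuming the hurst function above), a random-walk-like series should come out near H ≈ 0.5, a mean-reverting one near 0, and a trending one near 1:

    import numpy as np
    
    rng = np.random.default_rng(0)
    gbm = np.log(np.cumsum(rng.normal(size=10000)) + 1000)        # random-walk-like  -> H near 0.5
    mr = np.log(rng.normal(size=10000) + 1000)                    # mean reverting    -> H near 0
    tr = np.log(np.cumsum(rng.normal(size=10000) + 1) + 1000)     # trending          -> H near 1
    
    for name, series in [("random walk", gbm), ("mean reverting", mr), ("trending", tr)]:
        print(name, round(hurst(series), 3))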

    To make sure the series is not a random walk, we can test the statistical significance of the H value with the Variance Ratio Test:

    import numpy as np
    
    def variance_ratio(ts, lag=2):
        """
        Returns the variance ratio test result
        """
        # make sure we are working with an array, convert if necessary
        ts = np.asarray(ts)
        n = len(ts)
        
        # mean one-period change
        mu = sum(ts[1:n] - ts[:n-1]) / n
        # scaling factor for the variance of the lag-period changes
        m = (n - lag + 1) * (1 - lag / n)
        # variance of one-period changes
        b = sum(np.square(ts[1:n] - ts[:n-1] - mu)) / (n - 1)
        # variance of lag-period changes
        t = sum(np.square(ts[lag:n] - ts[:n-lag] - lag * mu)) / m
        return t / (lag * b)
    
    #Source: Corrius (2018)

    The test compares the variance of the lag-period changes with lag times the variance of the one-period changes. For a random walk these grow proportionally, so the ratio is close to one. A ratio well below one points toward mean reversion, while a ratio well above one points toward a trending series.
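    For a rough sense of how to read the output (assuming the variance_ratio function above), compare a simulated random walk with a strongly mean-reverting white-noise series:

    import numpy as np
    
    rng = np.random.default_rng(1)
    random_walk = np.cumsum(rng.normal(size=10000))   # k-period variance grows with k -> ratio near 1
    white_noise = rng.normal(size=10000)              # strongly mean reverting        -> ratio well below 1
    
    print("random walk :", round(variance_ratio(random_walk, lag=10), 3))
    print("white noise :", round(variance_ratio(white_noise, lag=10), 3))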

    So how long will it take for the time series to revert, i.e. to diffuse back to the mean? This is measured by the ‘half-life’ of the mean reversion.

    import numpy as np
    
    def half_life(ts):
        """
        Calculates the half-life of mean reversion (in bars)
        """
        # make sure we are working with an array, convert if necessary
        ts = np.asarray(ts)
        
        # delta = p(t) - p(t-1)
        delta_ts = np.diff(ts)
        
        # regress the deltas against the lagged values p(t-1), plus an intercept
        lag_ts = np.vstack([ts[:-1], np.ones(len(ts) - 1)]).T
        
        # slope of the deltas vs the lagged values; negative for a mean-reverting series
        beta = np.linalg.lstsq(lag_ts, delta_ts, rcond=None)[0][0]
        
        # compute and return the half life: -ln(2) / slope
        return -np.log(2) / beta
    
    #source: adapted from Corrius (2018)

    So we can see that we can work with mean reversion in code, notably for fintech, through the following steps (a combined sketch follows the list):

    1. Test for stationarity using the Augmented Dickey-Fuller test (ADF test)
    2. Confirm by estimating the Hurst exponent (H)
    3. Check significance with the variance ratio test
    4. Estimate the time to mean revert using the half-life calculation
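    Putting the four steps together, here is a minimal driver sketch. It assumes the adf, hurst, variance_ratio and half_life functions defined above and a one-dimensional array of at least a few hundred closing prices; the 5% threshold and lag of 2 are just illustrative defaults.

    import numpy as np
    
    def mean_reversion_report(prices, threshold=0.05):
        """Run the four checks above on a 1-D array of prices and print a short summary."""
        prices = np.asarray(prices, dtype=float)
        
        pvalue = adf(prices)                  # 1. ADF test for stationarity
        h = hurst(prices)                     # 2. Hurst exponent (needs > 100 observations for the lags above)
        vr = variance_ratio(prices, lag=2)    # 3. variance ratio
        hl = half_life(prices)                # 4. half-life of mean reversion
        
        print("ADF p-value     :", round(pvalue, 4), "(stationary)" if pvalue < threshold else "(non-stationary)")
        print("Hurst exponent  :", round(h, 3))
        print("Variance ratio  :", round(vr, 3))
        print("Half-life (bars):", round(hl, 1))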

  • Comparative Z-Scores for Stock Prices in Varying Sample Sizes

    A quick study of z-scores in varying samples of NVDA stock prices.

    I used python to generate these plots.

    Animation of the NVDA z-score scatter plot; x and y are in standard-deviation units, and anything above 3 or below -3 is conventionally treated as an outlier in data science. The DeepSeek release had nearly an outlier-level effect on NVDA’s value. The image shows the last 15 days, 30 days, 45 days, 60 days and 93 days.

    Z-scores for the QQQ tech-index ETF over the same time periods as for NVDA above. In an index ETF such as QQQ one should see a smoother spectrum, as it is less susceptible to volatility.

    First, we see the price chart for the last 93 trading days. Afterwards, we take a look at the 93, 60, 45, 30 and 15 day sample windows, all going backward in time from Feb. 20th, 2025. One major perturbative wave that hit the stock was the release of DeepSeek, which temporarily had a negative impact on NVDA’s value. The question is whether one can see a correlation with the action of the index ETF for the sector NVDA is in (and of whose portfolio it is a part). We examine the spread in the z-scores to see whether it is an indicator of up or down motion of NVDA’s price relative to the index.

    Comparing different plots of z-scores for NVDA stock prices from Oct 4th, 2024 to February 20, 2025.

    NVDA past 93 trading days. open price 124.92, close price 140.11 (gained about 12%)

    QQQ ETF, of which NVDA is a member of the portfolio, past 93 trading days, open price: 487.32 close price: 537.23 (gained 10.2%)

    Z-Scores:

    The z-score is calculated as z = (x − μ) / σ, where x is the observed price, μ is the sample mean, and σ is the sample standard deviation.
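    In code this is a one-liner with scipy (a minimal sketch, with a hypothetical close array standing in for the actual price series):

    import numpy as np
    from scipy.stats import zscore
    
    close = np.array([124.92, 130.10, 128.45, 135.20, 140.11])   # hypothetical closing prices
    z = zscore(close)    # (x - mean) / standard deviation for each observation
    print(np.round(z, 3))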

    93 Days:

    NVDA past 93 trading days zscores

    QQQ past 93 trading days

    I use the terms "prices" as code for NVDA and "trends" for QQQ, the index ETF for tech stocks. 
    
    Some Data for Z-scores:
    
    Shape of Z-score plots: 
    prices max/min, trends max/min:  1.7594241527219399 -2.9115388106292883 1.7722687913449942 -2.0281115555314564
    
    
    NVDA length and mean of positive and negative:
    prices positive len:  51 0.7123723838425315
    prices negative len:  43 -0.8449067808364951
    
    prices positive list:  [0.14019500216746938, 0.13021735775384247, 0.9441080663512041, 0.9270035330706985, 0.35257629040040717, 0.4737334011373164, 0.6348010895287374, 0.4894125566444479, 0.5934651341008516, 0.3205052904994612, 0.40246451246854575, 1.2149298432925333, 1.6810283751862907, 1.502856153514364, 1.1650416212243906, 1.5969310865571407, 1.30900477633531, 1.3788482872307024, 0.6975177115572552, 0.43667357902955695, 1.4144827315650879, 1.254840420947041, 1.3660198872703233, 0.6932415782371287, 0.16585180208822778, 0.2200161574764928, 0.4523527345366844, 1.147937087943885, 1.136534065756884, 0.7630850891325253, 0.2456729573972512, 0.3169418460660218, 0.03614242471106574, 0.36825544590753456, 0.44665122344318386, 0.40531526801529805, 0.057523091311697735, 0.17440406872848058, 1.052436777127734, 1.7594241527219399, 0.4352482012561788, 0.43097206793605647, 0.08888140232595665, 0.5335992676190859, 1.4230349982053405, 1.4444156648059727, 0.7887418890532837, 0.25137446849075173, 0.32977024602640104, 0.30553882387901676, 0.43097206793605647]
    
    prices negative list:  [-1.6826224330881445, -2.0281115555314564, -1.5515748349199896, -1.2967600607041379, -1.3338240278628077, -1.2828610730196344, -1.0088524586680414, -1.4489870686772424, -1.4450159293388134, -1.4225128064210513, -1.2093949952586989, -1.146518622400242, -1.1107783683543844, -1.6137893512220423, -1.3516941548857364, -1.1531371879642918, -1.1478423355130543, -0.8327986146643587, -1.082980392985381, -1.9122866581606137, -1.6753420109676906, -1.7693256419771755, -1.3589745770061903, -0.47407236109295114, -0.032614037970937836, -0.26823497205105246, -1.0704051184136898, -0.8420646064540289, -0.6143859510507724, -0.6335797911865096, -0.5137837544772378, -0.4601733734084476, -0.4072248488960653, -0.22719986555395358, -0.49260434467228414, -0.19874003362854603, -0.10012340672422786, -0.16630906236470946, -0.36751345551177483, -0.4753960742057596, -0.507165188913192]
    
    
    Trends (QQQ) length and mean for positive/negative:
    trends positive len:  53 0.7493882152106839
    trends negative len:  41 -0.9687213513699346
    
    trends positive list:  [0.05276545780528101, 0.09247685118957147, 0.07262115449743001, 0.011068494751777909, 0.1685903551761238, 0.2731636910880874, 0.6960900306307639, 0.6001208299520626, 0.9092078417931164, 0.6378466536671404, 0.520036186627085, 1.1368864971963693, 0.9105315549059249, 1.1772597471370638, 1.6829181562303424, 1.5260581523624006, 0.2466894288318963, 0.09446242085878412, 0.39163601468454706, 0.670277624930977, 1.1395339234219937, 1.1157070873914194, 0.6497600716824238, 0.18976976498108122, 0.3863411622333134, 0.7808076698505786, 0.16130993305566993, 0.16726664206331537, 0.2619121296292083, 0.022320056210664595, 0.595487834057233, 0.7980159403171032, 1.2421216896647371, 1.316911480538481, 1.117692657060632, 0.0971098470844085, 0.6001208299520626, 0.535258887424397, 0.6821910429462604, 0.631889944659495, 0.3552339040822852, 0.776174673955749, 0.9336965343800949, 1.1157070873914194, 0.6735869077129981, 1.0925421079172493, 1.009148181810243, 1.0296657350587888, 1.5326767179264504, 1.681594443117534, 1.7623409429989234, 1.7722687913449942, 1.6207036399282937]
    
    trends negative list:  [-1.6826224330881445, -2.0281115555314564, -1.5515748349199896, -1.2967600607041379, -1.3338240278628077, -1.2828610730196344, -1.0088524586680414, -1.4489870686772424, -1.4450159293388134, -1.4225128064210513, -1.2093949952586989, -1.146518622400242, -1.1107783683543844, -1.6137893512220423, -1.3516941548857364, -1.1531371879642918, -1.1478423355130543, -0.8327986146643587, -1.082980392985381, -1.9122866581606137, -1.6753420109676906, -1.7693256419771755, -1.3589745770061903, -0.47407236109295114, -0.032614037970937836, -0.26823497205105246, -1.0704051184136898, -0.8420646064540289, -0.6143859510507724, -0.6335797911865096, -0.5137837544772378, -0.4601733734084476, -0.4072248488960653, -0.22719986555395358, -0.49260434467228414, -0.19874003362854603, -0.10012340672422786, -0.16630906236470946, -0.36751345551177483, -0.4753960742057596, -0.507165188913192]
    
    zscore silos: 
    prices 0 to 1:  35 0.4056562939002893
    prices 1 to 2:  16 1.3722767337624457
    prices 2 to 3:  0 nan
    prices 3>:  0 nan
    prices 0 to -1:  31 -0.4417795985574785
    prices -1 to -2:  8 -1.4510292935080256
    prices -2 to 3:  4 -2.7127490308408726
    prices <3:  0 nan
    
    trends 0 to 1:  35 -0.27555447552077394
    trends 1 to 2:  17 1.344595325715943
    trends 2 to 3:  0 nan
    trends 3>:  0 nan
    trends 0 to -1:  20 -0.41098618215387406
    trends -1 to -2:  21 -1.3951694395818528
    trends -2 to 3:  1 -2.0790028405616057
    trends <3:  0 nan

    60 DAYS

    NVDA past 60 days open: 146.67, close: 140.11

    QQQ past 60 days open: 504.98, close: 537.23

    "prices" is for NVDA, "trends" is for QQQ
    
    some data for 60 day plots:
    
    
    
    shape of plots: 
    prices max/min, trends max/min:  1.9145807927766163 -2.645347442651133 1.8695586413001115 -1.7542554651846929
    
    
    
    prices positive len:  34 0.6792199553321576
    prices negative len:  26 -0.8882107108189714
    
    prices positive list:  [1.5305282468571573, 0.8737427335456234, 0.04858635771143208, 0.1738208835547299, 0.35888968285649864, 0.4117664826570029, 0.6385801239065363, 1.3176295529235456, 1.3064975950708098, 0.9419259753936441, 0.43681338782566403, 0.5063881244052757, 0.23226366228160591, 0.556481934742594, 0.6330141449801685, 0.5926607977639948, 0.18634433613906048, 0.2531360832554902, 0.36723865124605237, 1.2243994059068675, 1.9145807927766163, 0.6218821871274288, 0.617707702932656, 0.03327991566391562, 0.07919924180646105, 0.2837489673505192, 0.7178953236072966, 1.5861880361208474, 1.6070604570947318, 0.9669728805623052, 0.4423793667520319, 0.5189115769896063, 0.49525616655253607, 0.617707702932656]
    
    prices negative list:  [-1.7542554651846929, -1.669273142479056, -1.5853399842512712, -1.2999672462767844, -1.720682201893578, -1.2548531737293467, -0.6725668885240624, -0.506798901024174, -0.11546055078710396, -0.5487654801380664, -0.7900733100429681, -0.318998459489499, -0.6389936252329415, -1.098527666530088, -1.203444114314825, -0.32739177531227037, -0.6841076977803849, -0.6746652174797553, -1.5223901155804267, -1.6934039254695472, -1.7437638204062227, -0.524634697147575, -0.9044322381283226, -0.7858766521315705, -0.0913297677966126, -0.3767025057710995]
    
    trends positive len:  34 0.7207852536147648
    trends negative len:  26 -0.9425653316500864
    
    trends positive list:  [0.1636172003202925, 0.01148835103241942, 0.5014481621871478, 0.07129072626972463, 0.8623607425666361, 0.5035464911428407, 0.926359775715327, 1.7279214367907145, 1.4792694555408878, 0.12269978568424661, 0.8665574004780336, 0.828787479275527, 0.09017568687097201, 0.29791025348475275, 0.004144199687494526, 0.32518852990878333, 1.0291778945443708, 1.147733480541123, 0.8319349727090662, 0.01148835103241942, 0.14158474628549397, 0.06184824596909498, 0.2905661021398278, 0.5402672478675009, 0.828787479275527, 0.12794560807347868, 0.7920667225508667, 0.6598719983420994, 0.6923960971553621, 1.4897611003193638, 1.7258231078350217, 1.8538211741324033, 1.8695586413001115, 1.6292999758730682]
    
    trends negative list:  [-1.7542554651846929, -1.669273142479056, -1.5853399842512712, -1.2999672462767844, -1.720682201893578, -1.2548531737293467, -0.6725668885240624, -0.506798901024174, -0.11546055078710396, -0.5487654801380664, -0.7900733100429681, -0.318998459489499, -0.6389936252329415, -1.098527666530088, -1.203444114314825, -0.32739177531227037, -0.6841076977803849, -0.6746652174797553, -1.5223901155804267, -1.6934039254695472, -1.7437638204062227, -0.524634697147575, -0.9044322381283226, -0.7858766521315705, -0.0913297677966126, -0.3767025057710995]
    
    zscore silos: 
    prices 0 to 1:  26 0.4857168466015017
    prices 1 to 2:  7 1.4753238481642317
    prices 2 to 3:  0 nan
    prices 3>:  0 nan
    prices 0 to -1:  20 -0.42879927835305054
    prices -1 to -2:  3 -1.5721141625126993
    prices -2 to 3:  4 -2.4158942235473466
    prices <3:  0 nan
    
    trends 0 to 1:  23 -0.06220277723223607
    trends 1 to 2:  9 1.5519738719284009
    trends 2 to 3:  0 nan
    trends 3>:  0 nan
    trends 0 to -1:  18 -0.4898406138886327
    trends -1 to -2:  10 -1.5547354231811998
    trends -2 to 3:  0 nan
    trends <3:  0 nan

    45 DAYS

    NVDA past 45 days open: 134.25, close: 140.11

    QQQ past 45 days open: 530.53, close: 537.23

    some data for 45 day plot:
    
    
    
    shape of plots: 
    prices max/min, trends max/min:  2.0056164520976707 -2.335406431884398 1.7445748897642444 -2.004426294606925
    
    
    zscore means of prices and trends:  4.46309655899313e-15 -1.0288066694859784e-14
    
    NVDA prices length and mean of positive/negative:
    prices positive len:  24 0.7357314773777098
    prices negative len:  21 -0.8408359741459445
    
    prices positive list:  [0.054341899730996124, 0.712714915702643, 0.7855730965445773, 0.7471569648279229, 0.36034625926711406, 0.4239315807291669, 2.943764882851058e-05, 0.5325565048935021, 1.3485681303231487, 2.0056164520976707, 0.7749755429675673, 0.7710014603761928, 0.2146298975832493, 0.2583448060884106, 0.45307485306593986, 0.8663794425692682, 1.6929886215759211, 1.7128590345328127, 1.1034997038548302, 0.13249885736143355, 0.6040899915383078, 0.6769481723802421, 0.6544283710290971, 0.7710014603761928]
    
    prices negative list:  [-0.7645562745724367, -1.014925110927081, -0.5261615999565064, -0.06787777341171075, -0.10162313831168997, -0.8581724481659111, -1.334961797397784, -1.443817813204148, -0.5348700812210075, -0.9049805349626544, -0.8951834935400782, -1.7747401012554962, -1.9521754070198687, -2.004426294606925, -0.739519390936971, -1.1335781681560086, -0.19088507127290136, -1.010570870294818, -0.18326515016646283, -0.28994404565668896, -0.048283690566570704, -0.1310142625794062, -0.5860324086500015, -0.0624349726213975]
    
    pos/neg length and mean:
    trends positive len:  21 0.8835238047359075
    trends negative len:  24 -0.7730833291439385
    
    trends positive list:  [0.7659593076650326, 1.5976192684256507, 1.3396305109645679, 0.7039113786554122, 0.6647232129651197, 0.11391177298491237, 0.14221433709456596, 0.8726382031552712, 0.9956455010164618, 0.6679888934393077, 0.10629185187847384, 0.3653691694976192, 0.6647232129651197, 0.6266236074328899, 0.4894650275168725, 0.5232103924168393, 1.3505161125452068, 1.5954421481095253, 1.7282464873932923, 1.7445748897642444, 1.4952946135676752]
    
    trends negative list:  [-0.7645562745724367, -1.014925110927081, -0.5261615999565064, -0.06787777341171075, -0.10162313831168997, -0.8581724481659111, -1.334961797397784, -1.443817813204148, -0.5348700812210075, -0.9049805349626544, -0.8951834935400782, -1.7747401012554962, -1.9521754070198687, -2.004426294606925, -0.739519390936971, -1.1335781681560086, -0.19088507127290136, -1.010570870294818, -0.18326515016646283, -0.28994404565668896, -0.048283690566570704, -0.1310142625794062, -0.5860324086500015, -0.0624349726213975]
    
    zscore silos: 
    prices 0 to 1:  19 0.5159193883045129
    prices 1 to 2:  4 1.4639440396810512
    prices 2 to 3:  1 2.005079787550808
    prices 3>:  0 nan
    prices 0 to -1:  14 -0.38097024812529595
    prices -1 to -2:  4 -1.454678571075471
    prices -2 to 3:  3 -2.1703421886682137
    prices <3:  0 nan
    
    trends 0 to 1:  13 0.18729980332171914
    trends 1 to 2:  8 1.5002054504426652
    trends 2 to 3:  0 nan
    trends 3>:  0 nan
    trends 0 to -1:  16 -0.42194589789701936
    trends -1 to -2:  7 -1.3777869923741477
    trends -2 to 3:  1 -2.0051154449596136
    trends <3:  0 nan

    30 DAYS

    NVDA past 30 days, during which it took a major dive on the release of DeepSeek. open: 140.14, close: 140.11

    QQQ past 30 days open: 515.18, close: 537.23

    Some Data for 30 day plot
    
    shape of zscore plot
    prices max/min, trends max/min:  1.7395624418627935 -2.0102455727613866 1.635662150597431 -1.980244163451975
    
    beats_condition, max, min:  False True False
    avg zscores list:  [-0.04900609322786642, -0.043237954955050006, -1.4069222487518587, -1.9069027691119136, -2.1376722944324356, -0.3679613081523311, -1.0756478274354047, 0.34156933192037153, 1.0456770404126563, 2.5158374261919283, 2.6528832614306106, 1.7724244766420925, -2.8159600623456074, -0.7210526618453241, -1.4730439252605447, -1.1233948714886564, -1.765168780240289, -2.622450849103493, -1.7105258315300362, -0.7023407103306647, 0.0587929352193316, -0.500214456769451, 0.6220642834292512, 0.39529360882055975, 0.2241537498518348, 1.531306979740637, 2.2043609778904045, 2.3999373023930133, 2.3948264891602498, 2.2623747818778503]
    zscore differential avg:  -4.4704980458239636e-15
    zscore means of prices and trends:  -7.919590908992784e-16 -3.796962744218036e-15
    
    trend count and mean of pos or neg:
    prices positive len:  17 0.7172287353811013
    prices negative len:  13 -0.9379145001137494
    
    prices positive list:  [0.8708241976370331, 0.8671431033818425, 0.35178990765469526, 0.022945487524039902, 0.3922819444618296, 0.06466455574957113, 0.5726555629663304, 0.9554893655064959, 1.7211569705868235, 1.7395624418627935, 1.1751279893997306, 0.06466455574957113, 0.2757139597140209, 0.7125371446636966, 0.780023872675586, 0.7591643385628187, 0.8671431033818425]
    
    prices negative list:  [-0.9198302908648995, -0.9103810583368925, -1.758712156406554, -1.9298482566359534, -1.980244163451975, -0.7602432526141607, -1.1403123831849757, -0.23108623104595882, -1.0216720192222633, -0.22373682796862931, -0.326628471051326, -0.09354740202724439, -0.1733409211526078, -0.6122052763421065, -0.10719629345658256]
    
    trend count and mean of pos or neg:
    trends positive len:  15 0.8125990002508011
    trends negative len:  15 -0.8125990002508086
    
    trends positive list:  [0.09018767490616038, 0.7946804556051048, 0.9133208195678171, 0.5972964872423618, 0.055540488970154546, 0.30542019359958456, 0.5941467430663635, 0.5573997276796802, 0.42511047228762966, 0.45765782877296995, 1.2555930200266159, 1.4918238332267078, 1.6199134297174271, 1.635662150597431, 1.395231678496008]
    
    trends negative list:  [-0.9198302908648995, -0.9103810583368925, -1.758712156406554, -1.9298482566359534, -1.980244163451975, -0.7602432526141607, -1.1403123831849757, -0.23108623104595882, -1.0216720192222633, -0.22373682796862931, -0.326628471051326, -0.09354740202724439, -0.1733409211526078, -0.6122052763421065, -0.10719629345658256]
    
    zscore silos: 
    prices 0 to 1:  14 0.5195964415593288
    prices 1 to 2:  3 1.5885045560705986
    prices 2 to 3:  0 nan
    prices 3>:  0 nan
    prices 0 to -1:  7 -0.38925651381809206
    prices -1 to -2:  5 -1.4606024370377964
    prices -2 to 3:  1 -2.0120560681267605
    prices <3:  0 nan
    
    trends 0 to 1:  11 0.23112599712881077
    trends 1 to 2:  5 1.4620954964259174
    trends 2 to 3:  0 nan
    trends 3>:  0 nan
    trends 0 to -1:  9 -0.4261916020115731
    trends -1 to -2:  4 -1.522227807745561
    trends -2 to 3:  1 -2.0472318717466145
    trends <3:  0 nan

    15 DAYS

    NVDA past 15 days, during which the price regained momentum and climbed back up. open: 124.65, min on day 81 at 116.64, close: 140.11

    QQQ past 15 days open: 523.05, close: 537.23.

    Some Data for 15 day plot
    
    shape of z-score: 
    prices max/min, trends max/min:  1.2978590301422221 -1.7892515585278195 1.4875489718437718 -1.7015455028399435
    
    
    prices positive len:  8 0.7961212804164766
    prices negative len:  7 -0.9098528919045411
    prices positive list:  [0.4368908744960763, 0.3355230641218386, 0.11698986253581116, 0.6633228665008721, 1.1319844313480085, 1.2043900101867528, 1.1820101040002302, 1.2978590301422221]
    
    prices negative list:  [-0.9657152036835966, -1.0789198650922653, -1.7015455028399435, -0.7542012310515969, -0.3996918966402359, -0.9850791589245527, -0.04220349219180732, -0.22988490452723273, -0.18370931895265175]
    
    trends positive len:  6 1.056825095650698
    trends negative len:  9 -0.7045500637670981
    trends positive list:  [0.009930233456925725, 0.9483372951340528, 1.2834826743044578, 1.465205946565748, 1.4875489718437718, 1.1464454525992316]
    
    trends negative list:  [-0.9657152036835966, -1.0789198650922653, -1.7015455028399435, -0.7542012310515969, -0.3996918966402359, -0.9850791589245527, -0.04220349219180732, -0.22988490452723273, -0.18370931895265175]
    
    zscore silos: 
    prices 0 to 1:  5 0.3387271345868971
    prices 1 to 2:  4 1.13156977020328
    prices 2 to 3:  0 nan
    prices 3>:  0 nan
    prices 0 to -1:  3 -0.4156379102954067
    prices -1 to -2:  3 -1.6576670076204636
    prices -2 to 3:  0 nan
    prices <3:  0 nan
    
    trends 0 to 1:  1 0.584379796953797
    trends 1 to 2:  4 1.3465306598267066
    trends 2 to 3:  0 nan
    trends 3>:  0 nan
    trends 0 to -1:  7 -0.3399223000040687
    trends -1 to -2:  3 -1.315505544516218
    trends -2 to 3:  0 nan
    trends <3:  0 nan

    The above charts and data are generated in the following code snippets.

    This code snippet gets data into Pandas Dataframes from the Alpaca API.

    ############### INIT CEILLI CLASSES ####################     
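    # NOTE: this snippet assumes pandas (pd) and numpy (np) are already imported, and that
    # api_key, api_secret, api_base_url, ALGO_VERSION, MODE, DEBUG and DATE have been
    # loaded earlier (e.g. from settings.toml / the environment) in the full codebase.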
    
    from classes.stock_list import StockList
    from classes.config import Config
    from classes.alpaca import Alpaca
    from classes.utilities import Utilities
    from classes.market_beat import MarketBeat
    from classes.profit_loss import ProfitLoss
    from classes.plots import Plots
    
    util = Utilities(pd.DataFrame())
    conf = Config(api_key=api_key, api_secret=api_secret, api_base_url=api_base_url, algo_version=ALGO_VERSION)
    mb = MarketBeat(pd.DataFrame(), api_key=api_key, api_secret=api_secret, api_base_url=api_base_url, algo_version=ALGO_VERSION)
    alpa = Alpaca(api_key=api_key, api_secret=api_secret, api_base_url=api_base_url, algo_version=ALGO_VERSION)
    stocks = StockList()
    plots = Plots(pd.DataFrame())
    
    ############## SETTINGS ###################
    #CONSTANTS, see setting.toml for conflicts, set here to overide settings.toml file Constants
    ALGO_VERSION = conf.algo_version
    BASE_CURRENCY = conf.base_currency
    
    ############# LOGGING #################
    import logging
    logging.basicConfig(
        filename="logs/charts_"+ALGO_VERSION+".log",
        level=logging.INFO,
        format="%(asctime)s:%(levelname)s:%(message)s"
        )
    
    alpa = Alpaca(api_key=api_key, api_secret=api_secret, api_base_url=api_base_url, algo_version=ALGO_VERSION)
    
    ############################### CONFIGS ###################################################
    # API Credentials alpaca4 edge 
    API_KEY = conf.api_key
    API_SECRET = conf.api_secret
    API_BASE_URL = conf.api_base_url
    SECRET_KEY = API_SECRET
    
    #CONSTANTS
    TIMEZONE_OFFSET = -4.0 #set in config file, this is deprecated, i think
    
    if DEBUG:
        PROCESS_ROWS = 0  #set to low number for debugging, otherwise 1000
    else:
        PROCESS_ROWS = 1000
    
    
    
    ########################### DRIVER #######################################
    date = DATE
    
    from datetime import date
    from datetime import timedelta
    import datetime
    from datetime import datetime, timezone, timedelta
    
    N_DAYS_AGO = 500
    YESTERDAY = 1
    #today = datetime.now()
    today = date.today()    
    n_days_ago = today - timedelta(days=N_DAYS_AGO)
    one_day_ago = today - timedelta(days=YESTERDAY)
    
    today = date.today()
    timezone_offset = -4 # EST is -4, that is 4 hours behind GMT
    tzinfo = timezone(timedelta(hours=timezone_offset))
    now = datetime.now(tzinfo)
    back_time = now - timedelta(minutes=15)
    date = back_time.strftime("%Y-%m-%d %H:%M:%S")
    start_time = now - timedelta(minutes=45)
    start = start_time.strftime("%Y-%m-%d %H:%M:%S")
    end = date
    
    beg_date = str(n_days_ago) + ' 00:00:00'
    end_date = str(one_day_ago) + ' 23:59:00'
    
    
    
    
    
    if MODE == 'SCREENER' or MODE == 'HISTORICAL':
        try:
    
            #STOCK_LIST = stocks.TECH_AL
            STOCK_LIST = ['NVDA', 'MSFT']
            
    
            
            STOCK_SET = set(STOCK_LIST) #remove duplicates from list
            STOCK_LIST = list(STOCK_SET)
            STOCK_LIST = sorted(STOCK_LIST)
            symbol_list = STOCK_LIST
    
            
            index_symbol = stocks.stock_index(ALGO_VERSION)
            
    
            cnt = 0
            for symbol in symbol_list:
                print(ALGO_VERSION)
                print(symbol)
                print(index_symbol)
                
    
                hundred_dates = alpa.get_calendar(str(n_days_ago), str(one_day_ago))
    
    
                #get prices for symbol in trading list
                symbol_price_data = alpa.stockbars_by_symbol_by_day(symbol, beg_date, end_date)
                symbol_price_data = symbol_price_data.reset_index(level=("symbol", "timestamp"))
                prices_data = symbol_price_data
                #get prices for trend index for symbol above
                index_price_data = alpa.stockbars_by_symbol_by_day(index_symbol, beg_date, end_date)
                index_price_data = index_price_data.reset_index(level=("symbol", "timestamp")) #alpaca dataframe return has an index of symbol, timestamp format
                column_names = index_price_data.columns
                trends_data = index_price_data
                symbol_prices = symbol_price_data
    
    
                column_names = prices_data.columns
                print(column_names)
                prices_data = symbol_prices[['timestamp', 'symbol', 'open', 'close', 'vwap']].copy()
                trends_data = trends_data[['timestamp', 'symbol', 'open', 'close', 'vwap']].copy()
                prices_data.rename(columns = {'timestamp':'date'}, inplace = True)
                trends_data.rename(columns = {'timestamp':'date'}, inplace = True)
    
                #prices_data = prices_data.reset_index()
                #trends_data = trends_data.reset_index()
                date_stamp = prices_data.iloc[0]['date']
        
    
    
                print()
                print()
                print("Statistical Analysis: ")
    
    
                prices_arr = np.array(prices_data['close'])
    
                from scipy.stats import skew, kurtosis 
                # Calculate the skewness 
                print("Symbol Prices skew: ")
                print(skew(prices_data['close'], axis=0, bias=True))
                print("Index skew: ")
                print(skew(trends_data['close'], axis=0, bias=True))
    
                # Calculate the kurtosis 
                print("Symbol Prices kurtosis: ")
                print(kurtosis(prices_data['close'], axis=0, bias=True))
                print("Index kurtosis: ")
                print(kurtosis(trends_data['close'], axis=0, bias=True))
    
                print()
                print("Covariance between the two: ")
                cov_matrix = np.stack((prices_data['close'], trends_data['close']), axis = 0) 
                print(np.cov(cov_matrix))
    
                print()
                print("Correlation between the two: ")
                correlations = np.correlate(prices_data['close'], trends_data['close']) 
                print(correlations)
    
                print()
                print()
                
                print("Mean of the Symbol: ")
                data_mean = np.mean(prices_data['close'])
                data_max = max(prices_data['close'])
                data_min = min(prices_data['close'])
                print("mean is: " + str(data_mean))
                print("max/min is: "+str(max(prices_data['close'])), str(min(prices_data['close'])))
                
                print()
                print()
    
                print("Variance of the Symbol: ")      
                m = sum(prices_data['close']) / len(prices_data['close'])
                std_dev = np.std(prices_data['close'])
                print("std dev: "+str(std_dev))
    
                import scipy.stats as scipy
                zscore_list = scipy.zscore(prices_data['close'])
                print("symbol z-scores list: ")
                print(zscore_list)
                trends_zscore_list = scipy.zscore(trends_data['close'])
                print("trends z-scores list: ")
                print(trends_zscore_list)
    
                
                import statistics
                # Calculate the variance from a sample of data
                data_variance = statistics.variance(prices_data['close'])
                print("variance result: "+str(data_variance))
    
                print()
                print()
    
                print("Market Beat Metrics: ")
                #prices = prices_data.iloc[:lookback_period]
                vars, vibe_check = mb.compare_rates(trends_data, prices_data)
                vars15, vibe_check15 = mb.compare_rates(trends_data[-15:], prices_data[-15:])
                vars30, vibe_check30 = mb.compare_rates(trends_data[-30:], prices_data[-30:])
                vars45, vibe_check45 = mb.compare_rates(trends_data[-45:], prices_data[-45:])
                vars60, vibe_check60 = mb.compare_rates(trends_data[-60:], prices_data[-60:])
    
                print(vars)
    
    

    The z-scores are put into silos based on standard deviation in a Market Beat class function. A snippet from that function follows; it appends each z-score value to a list for its silo (bin). I included the logic here because it can be beneficial to be able to sort these into bins:

                 
                    if current_idx_z >= 0:
                        trends_positive.append(current_idx_z)
                    else:
                        trends_negative.append(current_idx_z)

                    if current_price_z >= 0:
                        prices_positive.append(current_price_z)
                    else:
                        prices_negative.append(current_price_z)

                    # silo the symbol z-score into one-standard-deviation bins
                    if 0 <= current_price_z < 1:
                        prices_0to1.append(current_price_z)
                    elif 1 <= current_price_z < 2:
                        prices_1to2.append(current_price_z)
                    elif 2 <= current_price_z < 3:
                        prices_2to3.append(current_price_z)
                    elif current_price_z >= 3:
                        prices_3up.append(current_price_z)
                    elif -1 < current_price_z < 0:
                        prices_0toneg1.append(current_price_z)
                    elif -2 < current_price_z <= -1:
                        prices_neg1toneg2.append(current_price_z)
                    elif -3 < current_price_z <= -2:
                        prices_neg2toneg3.append(current_price_z)
                    else:  # current_price_z <= -3
                        prices_neg3.append(current_price_z)

                    # silo the index z-score the same way (note: append the index value, not the price value)
                    if 0 <= current_idx_z < 1:
                        trends_0to1.append(current_idx_z)
                    elif 1 <= current_idx_z < 2:
                        trends_1to2.append(current_idx_z)
                    elif 2 <= current_idx_z < 3:
                        trends_2to3.append(current_idx_z)
                    elif current_idx_z >= 3:
                        trends_3up.append(current_idx_z)
                    elif -1 < current_idx_z < 0:
                        trends_0toneg1.append(current_idx_z)
                    elif -2 < current_idx_z <= -1:
                        trends_neg1toneg2.append(current_idx_z)
                    elif -3 < current_idx_z <= -2:
                        trends_neg2toneg3.append(current_idx_z)
                    else:  # current_idx_z <= -3
                        trends_neg3.append(current_idx_z)
                                  
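    The same siloing can also be done more compactly with pandas.cut. This is just an alternative sketch, not the code used in the Market Beat class; the bin edges mirror the one-standard-deviation silos above.

    import numpy as np
    import pandas as pd
    
    def silo_zscores(zscores):
        """Bin z-scores into one-standard-deviation silos and report the count and mean per bin."""
        bins = [-np.inf, -3, -2, -1, 0, 1, 2, 3, np.inf]
        labels = ['< -3', '-3 to -2', '-2 to -1', '-1 to 0', '0 to 1', '1 to 2', '2 to 3', '> 3']
        s = pd.Series(np.asarray(zscores))
        return s.groupby(pd.cut(s, bins=bins, labels=labels), observed=False).agg(['count', 'mean'])
    
    # e.g. silo_zscores(zscore_list) with the zscore_list computed from scipy.zscore(prices_data['close']) above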

    The graphing part is handled in a Plots Class that is called by this code:

    
                path_15_index = 'plots/stats/zscores/scatter/'+str(today)+'_'+index_symbol+'_15.png'
                print(path_15_index)
                isFile = os.path.isfile(path_15_index)
                if isFile == False:
                    symbol_zscores_plot = plots.zscores_scatter_by_day(today, index_symbol, trends_data['zscores'][-15:], '15')
                else:
                    print(index_symbol + ' zscores scatter plot file exists for this date')
                

    Then, in the Plots class, I generate the plots:
    
    
    
        def zscores_scatter_by_day(self, plot_date, symbol, data, periodicity='all'):
                plot_date = str(plot_date)
                zscores = data.reset_index(drop = True)
        
                #zscores = zscores.tolist()
                print(zscores)
                # PLOTTING
                import matplotlib.pyplot as plt
                zscores_set = set(zscores) #remove duplicates from list
                zscores_list = list(zscores_set)
                zscores_list = sorted(zscores_list)
                print("zscores sorted and unique: ", zscores_list)
                import seaborn as sns
                sns.displot(zscores_list, color="maroon")
                plt.xlabel("zscore", labelpad=14)
                plt.ylabel("probability of occurence", labelpad=14)
                plt.title("Percent Ratio Z-scores distribution" + plot_date, y=1.015, fontsize=10);
                #plt.show()
                plt.savefig('plots/stats/zscores/'+symbol+'_'+str(plot_date)+'_'+periodicity+'.png',bbox_inches='tight')
                plt.clf()
    
                import matplotlib.pyplot as plt2
    
                x_cnt = 0
                color = 'grey'
                # https://matplotlib.org/stable/gallery/color/named_colors.html
                for i in zscores:
                    if i < 0 and i > -1:
                        color = 'orange'
                    elif i < -1 and i > -2:
                        color = 'indianred'
                    elif i < -2 and i > -3:
                        color = 'firebrick'
                    elif i < -3 and i > -4:
                        color = 'maroon'
    
                    elif i > 0 and i < 1:
                        color = 'yellow'
                    elif i > 1 and i < 2:
                        color = 'green' 
                    elif i > 2 and i < 3:
                        color = 'forestgreen'
                    elif i > 3 and i < 4:
                        color = 'darkgreen' 
                    elif i > 4 and i < 5:
                        color = 'darkolivegreen'
                    elif i > 5:
                        color = 'black' 
    
                    # print(zscores)  # printing the full series on every iteration is noisy
    
                    plt2.scatter(i, zscores[x_cnt], c=color) 
                    x_cnt += 1
                # depict first scatted plot
                #plt.scatter(x, y, c='blue')
                print('plots/stats/zscores/scatter/'+str(plot_date)+'_'+symbol+'_'+periodicity+'.png')
                plt2.savefig('plots/stats/zscores/scatter/'+str(plot_date)+'_'+symbol+'_'+periodicity+'.png',bbox_inches='tight')
                plt2.clf()
                # depict illustration
                #plt.show()
    
    

    This function outputs the plots into a directory for safekeeping and reference as needed. The first part of the function generates the z-score distribution charts and the second part generates the rainbow spectrum charts of z-scores. You have to pass in a series of z-scores to be plotted, plus the other parameters, which are self-explanatory. You’ll need to include these libraries in your own code for this function to work.

    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    import plotly.graph_objects as go
    import plotly.express as px
    from plotly.subplots import make_subplots
    from plotly.offline import iplot, init_notebook_mode
    
    import seaborn as sns
    
        def zscores_scatter_by_day(self, plot_date, symbol, data, periodicity='all'):
    ...
    ...
    ...

    I hope this can provide some insights to others on how to plot z-scores and work with pandas.

  • Sveltekit Auth and Crud

    As an older developer who is used to the LAMP stack, getting used to the new JavaScript frameworks was a bit challenging. One thing I found difficult was finding documentation on building out Auth and CRUD functionality for the SvelteKit framework, which is the latest fast kid on the block, smashing it as far as page load goes compared to more established JS frameworks.

    (source: https://www.softermii.com/blog/why-sveltejs-is-the-most-in-demand-framework-for-web-development )

    Due to SvelteKit being a new kid on the block, there aren't many established code examples or tutorials on how to do such simple things as CRUD and Auth. I found a nice Auth code base on github.com, but it didn't have a walkthrough of the code base or any CRUD, so the following article is about how to add CRUD to this very nice open source Auth codebase using SvelteKit.

    To begin you will need to have some things set up on your development computer, such as PostgreSQL for the database. Here are the steps to get things set up.

    How to extend Sveltekit-Auth

    1. Download the code base from https://github.com/delay/sveltekit-auth
    2. Install Postgres on your system if you don't have it yet. For OSX:

        > brew install postgresql

        > brew services start postgresql@14

        then open the Postgres CLI with:

        > psql postgres

        For Windows see https://www.postgresql.org/download/windows/
    3. In the Postgres CLI, create a user (I use auth_user below):

        CREATE USER auth_user WITH PASSWORD 'your_password';

    4. Create the database:

        CREATE DATABASE auth_demo;

    5. Add permissions on the database for your Postgres user:

        GRANT ALL PRIVILEGES ON DATABASE auth_demo TO auth_user;

    6. Open a new terminal window and go to the directory where you downloaded sveltekit-auth:

        > cd sveltekit-auth-master

    7. Get the db ready by generating the migration:

        > npx drizzle-kit generate:pg

    8. Push it to the db:

        > npx drizzle-kit push:pg


    If you need to make changes to the schema, you can do so and then run the generate and push commands again to update the database.

      9. in the root of the codebase directory copy sample.env to .env file

      10. update settings in .env file

    # rename to .env and put your values in
    # General
    # I used postgresql for the PRISMA_URL for this project 
    # but you should be able to use any DB prisma supports.
    # https://www.prisma.io/docs/reference/database-reference/supported-databases
    DATABASE_URL="postgresql://postgres_user:user_password@localhost:5432/db_name"
    
    # Good video on how to set up OAuth with Google https://www.youtube.com/watch?v=KfezTtt2GsA
    GOOGLE_CLIENT_ID=
    GOOGLE_CLIENT_SECRET=
    
    # Email
    FROM_EMAIL = 'first last <user@domain.com>'
    # use blank values in AWS variables if you want to use SMTP
    #AWS SES KEYS
    AWS_ACCESS_KEY_ID= ''
    AWS_SECRET_ACCESS_KEY= ''
    AWS_REGION= '' # us-east-1
    AWS_API_VERSION= '' # 2010-12-01
    # if AWS SES not set the SMTP will be a fallback
    SMTP_HOST=localhost
    SMTP_PORT=1025
    SMTP_SECURE=0 # use 1 for secure
    SMTP_USER=somethinghere
    SMTP_PASS=somepassword
    
    # Logging
    # Clear these to fallback to console.log
    AXIOM_TOKEN=your-axiom-token
    AXIOM_ORG_ID=your-axiom-org-id
    AXIOM_DATASET = your-axiom-dataset
    

      11. Sign up at Axiom, run npm install @axiomhq/js, create a dataset and an API token at Axiom, then add them to the Axiom variables in the .env file. See https://jeffmcmorris.medium.com/awesome-logging-in-sveltekit-6afa29c5892c

      12. install svelte app

    > npm install

      13. start the sveltekit server

     > npm run dev

    You should now see a user interface when you open http://localhost:5173. In my case, with branding added for my project, it looks like this:

    Extending the Codebase and adding CRUD Functionality

    Now that the app is up and running, we can start to add functionality that allows one to add db tables and to add, edit and delete records in those new tables, which is essential to any application.

    As an example I am going to create a settings module for my project. One may want to review the ORM used for db-related management; in this case we are using Drizzle, https://orm.drizzle.team/docs/get-started-postgresql

    There is a drizzle.config.ts file that defines where the drizzle schemas and migrations are kept; the schema defines the table for the ORM.
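
    As a rough sketch (the exact paths here are my assumptions based on this codebase's layout, so check them against your own checkout), the config looks something like this:

    // drizzle.config.ts (sketch)
    import 'dotenv/config'; // assumes DATABASE_URL is loaded from your .env
    import type { Config } from 'drizzle-kit';

    export default {
        schema: './src/lib/server/database/drizzle-schemas.ts',
        out: './src/lib/server/database/migrations',
        driver: 'pg',
        dbCredentials: {
            connectionString: process.env.DATABASE_URL as string
        }
    } satisfies Config;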

    see /src/lib/server/database/drizzle-schemas.ts

    Add a settings schema to the existing user and sessions schemas already in that file:

    export const settingsTable = pgTable('settings', {
        id: text('id').notNull().primaryKey(),
        user_id: text('user_id')
            .notNull()
            .references(() => userTable.id),
        key: text('key').notNull(),
        secret: text('secret').notNull(),
        createdAt: timestamp('created_at', {
            withTimezone: true,
            mode: 'date'
        }).notNull(),
        updatedAt: timestamp('updated_at', {
            withTimezone: true,
            mode: 'date'
        }).notNull()
    });

    We are going to work primarily with the key/secret pair, which is used for standard API calls, in this case an API for getting stock data.

    then add a type for settings in the same file and an update type:

    export type Settings = typeof settingsTable.$inferInsert;
    
    export type UpdateSettings = Partial<typeof settingsTable.$inferInsert>;

    then you need to generate and migrate the changes:

    > npx drizzle-kit generate:pg

    After you generate, there will be a new migration .sql file, numbered sequentially, in /src/lib/server/database/migrations/ along with a matching snapshot (0000_snapshot.json, 0001_snapshot.json, …) in its meta/ directory.

    then  push changes using:

    > npx drizzle-kit push:pg

    In the terminal window you should see output like this:

    > auth % npx drizzle-kit generate:pg
    drizzle-kit: v0.20.14
    drizzle-orm: v0.29.3
    No config path provided, using default 'drizzle.config.ts'
    Reading config file '/Users/michaelmccarron/Desktop/dev/auth/drizzle.config.ts'
    3 tables
    sessions 3 columns 0 indexes 1 fks
    settings 6 columns 0 indexes 1 fks
    users 13 columns 0 indexes 0 fks
    [✓] Your SQL migration file ➜ src/lib/server/database/migrations/0001_gigantic_quasimodo.sql 🚀

    > auth % npx drizzle-kit push:pg
    drizzle-kit: v0.20.14
    drizzle-orm: v0.29.3
    No config path provided, using default path
    Reading config file '/Users/michaelmccarron/Desktop/dev/auth/drizzle.config.ts'
    [✓] Changes applied

    Before running the push, if you went to the Postgres CLI and listed the tables you would see this:

    auth_demo=# \dt
     public | sessions | table | auth_user
     public | users    | table | auth_user

    Then after you generate and push you see:

    auth_demo=# \dt
     public | sessions | table | auth_user
     public | settings | table | auth_user
     public | users    | table | auth_user

    Also note that a new SQL file was generated with the additional table information, in /src/lib/server/database/migrations/0001_gigantic_quasimodo.sql

    CRUD functions are handled in src/lib/server/database/settings-model.ts (a new file alongside the existing user model); this is where you add the functionality for the database model.

    export const updateSetting = async (id: string, settings: UpdateSettings) => {
        const result = await db.update(settingsTable).set(settings).where(eq(settingsTable.id, id)).returning();
        if (result.length === 0) {
            return null;
        } else {
            return result[0];
        }
    };

    export const createSetting = async (settings: Settings) => {
        console.log("settings in createSetting of settings-model.ts");
        console.log(settings);
        const result = await db.insert(settingsTable).values(settings).onConflictDoNothing().returning();
        if (result.length === 0) {
            return null;
        } else {
            return result[0];
        }
    };
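
    The lister view further down also calls a deleteSetting() helper. It is not shown above, but a minimal sketch of it, following the same pattern in settings-model.ts (the exact shape is my assumption), would be:

    export const deleteSetting = async (id: string) => {
        // delete the row and return it so the caller can tell whether anything was removed
        const result = await db.delete(settingsTable).where(eq(settingsTable.id, id)).returning();
        if (result.length === 0) {
            return null;
        } else {
            return result[0];
        }
    };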

    User Interface for Settings

    settings is manipulated in the UI at localhost:port/profile/settings/

    you can download the files for the user interface at:

    https://feirmeoirsonrai.me/settings.zip

    this should be copied into the /src/routes/(protected)/profile directory so we get /src/routes/(protected)/profile/settings/…

    In this example we are going to add a typical key and secret string, the kind needed for any API-type SaaS setting.

    the eventual directory structure will be like this:

    src
    — routes
       — (protected)
          — profile
             +page.svelte
             +page.server.ts
             — settings
                +page.svelte
                +page.server.ts
                — editor
                   +page.svelte
                   +page.server.ts
                — lister
                   +page.svelte
                   +page.server.ts

    You place files that should not be exposed to a user without a valid session id, where the session id must match the user's session information (i.e. they own the content in the interface), in the (protected) directory.

    Step one is to create a profile directory under (protected).

    Paste the settings dir into (protected)/profile/

    Create the file /src/lib/config/zod-schemas-settings.ts

    paste into file: 

    import { z } from 'zod';
    
    export const settingsSchema = z.object({
        id: z.string().optional(),
        user_id: z.string().optional(),
        key: z
            .string({ required_error: 'key is required' })
            .min(1, { message: 'key is required' })
            .trim(),
        secret: z
            .string({ required_error: 'secret is required' })
            .min(1, { message: 'secret is required' })
            .trim(),
        createdAt: z.date().optional(),
        updatedAt: z.date().optional()
    });
    
    export type SettingsSchema = typeof settingsSchema;
    

    After registering as a user, update the db directly, since we are not setting up the email verification protocol yet:

     UPDATE users SET verified = 't' where email = 'reg_user@domain.com';

    Restart the server after updating:

    npm run dev

    When we go to the dashboard section of the site, we work with two files: dashboard/+page.server.ts, which initializes things for the view, and dashboard/+page.svelte, the user interface.

    The important imports are:

    import { db } from '$lib/server/db';
    import { settingsTable } from '$lib/server/database/drizzle-schemas';

    These imports are used for accessing the database. Because in this view we perform a check to see whether there is a settings record for the user, we need the drizzle schemas; db is the equivalent of the DB object in older systems based on PHP or other backend code bases.

    to perform a query run:

    const results = await db.select().from(settingsTable).where(eq(settingsTable.user_id, user_id));
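
    Putting it together, a minimal load() for dashboard/+page.server.ts might look roughly like this (a sketch under my assumptions about this codebase; it presumes the auth hook has already populated event.locals.user, and that the user object exposes an id):

    // src/routes/(protected)/dashboard/+page.server.ts (sketch)
    import { db } from '$lib/server/db';
    import { settingsTable } from '$lib/server/database/drizzle-schemas';
    import { eq } from 'drizzle-orm';

    export const load = async (event) => {
        const user_id = event.locals.user?.id; // adjust to however your auth setup names the id
        if (!user_id) {
            return { settings: [] };
        }
        // check whether this user already has a key/secret record
        const results = await db.select().from(settingsTable).where(eq(settingsTable.user_id, user_id));
        return { settings: results };
    };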


    +page.svelte is the user interface file. One important import is the zod schema, which is used for data validation in the form that is included in the file.

      import { settingsSchema } from '$lib/config/zod-schemas-settings';
    
      const deleteSettingsSchema = settingsSchema.pick({
        id: true,
        key: true,
        secret: true
      });

    Working with Editing Data and Updating the DB

    If you don't have any keys, the dashboard provides a link to add a key; this brings up profile/settings/

    In +page.server.ts make sure you have any db and db schema requirements met; in this case, since we are not reading any data or accessing the db here, there are none.

    In +page.svelte we then include the form components and the validation zod schema:

        import * as Form from '$lib/components/ui/form';
        import { settingsSchema } from '$lib/config/zod-schemas-settings';
        import type { SuperValidated } from 'sveltekit-superforms';


    Once the requirements are loaded, the UI renders a form with the code below. Note that createSettingsSchema used here is a pick of key and secret from settingsSchema, the same pattern used for the delete and edit schemas later on:

    
    <div class="flex items-center justify-center mx-auto max-w-2xl">
        <Form.Root let:submitting let:errors method="POST" {form} schema={createSettingsSchema} let:config>
            <Card.Root>
                <Card.Header class="space-y-1">
                    <Card.Title class="text-2xl">Create Alpaca Trading Keys</Card.Title>
                    <Card.Description
                        >You need an API key/secret from https://alpaca.markets</Card.Description>
                </Card.Header>
                <Card.Content class="grid gap-4">
                    {#if errors?._errors?.length}
                        <Alert.Root variant="destructive">
                            <AlertCircle class="h-4 w-4" />
                            <Alert.Title>Error</Alert.Title>
                            <Alert.Description>
                                {#each errors._errors as error}
                                    {error}
                                {/each}
                            </Alert.Description>
                        </Alert.Root>
                    {/if}
                    <Form.Field {config} name="key">
                        <Form.Item>
                            <Form.Label>Alpaca Key</Form.Label>
                            <Form.Input />
                            <Form.Validation />
                        </Form.Item>
                    </Form.Field>
                    <Form.Field {config} name="secret">
                        <Form.Item>
                            <Form.Label>Alpaca Secret</Form.Label>
                            <Form.Input />
                            <Form.Validation />
                        </Form.Item>
                    </Form.Field>
    
                </Card.Content>
                <Card.Footer>
                    <Form.Button class="w-full" disabled={submitting}
                        >{#if submitting}
                            <Loader2 class="mr-2 h-4 w-4 animate-spin" />
                            Please wait{:else}Update Keys{/if}
                    </Form.Button>
                </Card.Footer>
            </Card.Root>
        </Form.Root>
    </div>
    


    zod tutorial video: https://www.youtube.com/watch?v=L6BE-U3oy80

    Once the submit button is pushed and the data is validated using the zod schema, the data is inserted into the database, as long as the settings schema is defined correctly.
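
    The insert itself happens in a form action in profile/settings/+page.server.ts. That file isn't reproduced in this article, but following the same pattern as the edit action shown later, a sketch of it might look like this (the id generation, flash message text and import paths are my assumptions; adjust them to your codebase):

    // src/routes/(protected)/profile/settings/+page.server.ts (sketch)
    import { fail } from '@sveltejs/kit';
    import { superValidate, setError, message } from 'sveltekit-superforms/server';
    import { setFlash } from 'sveltekit-flash-message/server';
    import { randomUUID } from 'crypto';
    import { settingsSchema } from '$lib/config/zod-schemas-settings';
    import { createSetting } from '$lib/server/database/settings-model';

    const createSettingsSchema = settingsSchema.pick({ key: true, secret: true });

    export const load = async (event) => {
        // give the form component an empty, validated form object
        const form = await superValidate(event, createSettingsSchema);
        return { form };
    };

    export const actions = {
        default: async (event) => {
            const form = await superValidate(event, createSettingsSchema);
            if (!form.valid) {
                return fail(400, { form });
            }
            try {
                const user = event.locals.user;
                if (user) {
                    await createSetting({
                        id: randomUUID(),        // assumption: ids are generated app-side
                        user_id: user.id,        // assumption: the user object exposes id
                        key: form.data.key,
                        secret: form.data.secret,
                        createdAt: new Date(),
                        updatedAt: new Date()
                    });
                    setFlash({ type: 'success', message: 'Keys created.' }, event);
                }
            } catch (e) {
                console.error(e);
                return setError(form, 'There was a problem saving your trading keys.');
            }
            return message(form, 'keys created successfully.');
        }
    };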

    Once we have a setting submitted we can then list the settings.

    Listing the Settings and Editing

    in dir /profile/settings/lister

    We do the usual requirements, with the addition of one important new piece of code that handles form actions, such as update or delete. Here we are only concerned with deleting, so we add this to the file +page.server.ts:

    export const actions = {
        delete: async ({ request }) => {
            const formData = await request.formData();
            const rec_id = formData.get('id');
            console.log("rec id " + rec_id);

            console.log('deleting key');
            const deletedSetting = await deleteSetting(rec_id);
            console.log("deletedSetting " + deletedSetting);
            if (deletedSetting) {
                console.log("delete successful");
                setFlash(
                    {
                        type: 'success',
                        message: 'keys deleted.'
                    },
                    request
                );
                redirect(302, '/dashboard');
                //message(request, { text: 'deleted!' });
            }
        }
    };
    

    this is called from +page.svelte

    we need to add some new code to the file:

    const deleteSettingsSchema = settingsSchema.pick({
        id: true,
        key: true,
        secret: true
      });
    
      type DeleteSettingsSchema = typeof deleteSettingsSchema;
    
      export let form: SuperValidated<DeleteSettingsSchema>;
    


    then in the user interface we can iterate through the settings record rows, although here we are limited to only one:

    <ul>
      {#each data.json_results as alpaca}
        <li>{alpaca.key} - {alpaca.secret} <a href="/profile/settings/editor/{alpaca.id}">[edit]</a>
    
        <form method="POST"  onsubmit="return confirm('sure you want to delete this key?');" action="?/delete">
          <input type="hidden" id="id" name="id" value="{alpaca.id}" />
            <button>Delete</button>
          
          </form>
          
        </li>
      --------- <br />
      {/each}
    </ul>
    
    

    Editing the Settings

    Next we move on to editing. As seen in the previous code, there is a link to the editor. This is where we use a new dynamic directory, [id], at /profile/settings/editor/[id]

    See https://svelte.dev/docs/kit/advanced-routing for more information on how this works, but basically we are passing in a dynamic 'id' parameter and displaying a user interface based on that argument.

    for editing we are working with the files in the /profile/settings/editor/[id]/

    In +page.server.ts we need to make sure we import what is required for the settings update and other functionality:

    import { editSetting } from '$lib/server/database/settings-model';
    import { updateSetting } from '$lib/server/database/settings-model';
    import { settingsTable } from '$lib/server/database/drizzle-schemas';
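
    A load() is also needed here so the form comes up pre-filled with the existing key and secret. It isn't shown in this article, but a sketch (assuming db, eq, superValidate and the keySchema pick shown below are also imported or defined in this file) could be:

    // src/routes/(protected)/profile/settings/editor/[id]/+page.server.ts (sketch)
    export const load = async (event) => {
        // fetch the record for the dynamic [id] segment
        const results = await db.select().from(settingsTable).where(eq(settingsTable.id, event.params.id));
        const setting = results[0];
        // pre-populate the superforms object with the stored values
        const form = await superValidate({ key: setting?.key, secret: setting?.secret }, keySchema);
        return { form };
    };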


    And again we need to add the action for editing. Note that keySchema, the pick of key and secret from settingsSchema shown further down for +page.svelte, also has to be defined or imported here, since superValidate uses it:

    export const actions = {
        default: async (event) => {
            const form = await superValidate(event, keySchema);

            if (!form.valid) {
                return fail(400, {
                    form
                });
            }

            // update the setting record in the db
            try {
                console.log('updating profile');
                const user = event.locals.user;
                if (user) {
                    await updateSetting(event.params.id, {
                        key: form.data.key,
                        secret: form.data.secret
                    });
                    setFlash({ type: 'success', message: 'Keys update successful.' }, event);
                }
            } catch (e) {
                console.error(e);
                return setError(form, 'There was a problem updating your trading keys.');
            }
            console.log('keys updated successfully');
            return message(form, 'keys updated successfully.');
        }
    };
    
    

    in +page.svelte we make sure we have schema for validation:

    import { settingsSchema } from '$lib/config/zod-schemas-settings';
    
    import type { SuperValidated } from 'sveltekit-superforms';

    and we need:

    const keySchema = settingsSchema.pick({
        key: true,
        secret: true
    });

    type KeySchema = typeof keySchema;

    export let form: SuperValidated<KeySchema>;
    form = data.form;


    and then we have the form itself:

    <div class="flex items-center justify-center mx-auto max-w-2xl">
        <Form.Root let:submitting let:errors method="POST" {form} schema={keySchema} let:config>
            <Card.Root>
                <Card.Header class="space-y-1">
                    <Card.Title class="text-2xl">Edit Alpaca Trading Keys</Card.Title>
                    <Card.Description
                        >update your Alpaca Keys</Card.Description>
                </Card.Header>
                <Card.Content class="grid gap-4">
                    {#if errors?._errors?.length}
                        <Alert.Root variant="destructive">
                            <AlertCircle class="h-4 w-4" />
                            <Alert.Title>Error</Alert.Title>
                            <Alert.Description>
                                {#each errors._errors as error}
                                    {error}
                                {/each}
                            </Alert.Description>
                        </Alert.Root>
                    {/if}
                    <Form.Field {config} name="key">
                        <Form.Item>
                            <Form.Label>Alpaca Key</Form.Label>
                            <Form.Input />
                            <Form.Validation />
                        </Form.Item>
                    </Form.Field>
                    <Form.Field {config} name="secret">
                        <Form.Item>
                            <Form.Label>Alpaca Secret</Form.Label>
                            <Form.Input />
                            <Form.Validation />
                        </Form.Item>
                    </Form.Field>
    
                </Card.Content>
                <Card.Footer>
                    <Form.Button class="w-full" disabled={submitting}
                        >{#if submitting}
                            <Loader2 class="mr-2 h-4 w-4 animate-spin" />
                            Please wait{:else}Update Key/Secret{/if}
                    </Form.Button>
                </Card.Footer>
            </Card.Root>
        </Form.Root>
    </div>
    

    The values for the fields come from the form.data object; note that the name of the form field matches the corresponding property on form.data.

    for example:

    form.data.key is referenced as <Form.Field {config} name="key"></Form.Field>

    And that is how one adds CRUD to the sveltekit-auth codebase, as adapted for my project Céillí.

  • Building and Coding a Medical Device with C++ and Arduino Controller

    A personal confession up front: I am a highly functioning person with Autistic Spectrum Disorder, otherwise known as Asperger’s Syndrome. Science refers to us as being Neurodivergent, which the Oxford Languages Dictionary defines as “differing in mental or neurological function from what is considered typical or normal (frequently used with reference to autistic spectrum disorders)”. Part of my neurological difference is that I am extra sensitive to Electro-Magnetic Fields (EMF), which helped me in my duties as a Signals Intelligence technical analyst on USN Submarines but not so much in everyday life. To provide relief from this extra sensitivity I researched different ways to relieve this ‘pain’. This article is about the creation of my medical device and how I built it using an Arduino microcontroller and C++ to regulate the circuit power for the device.

    I will not be covering the science behind the medical device, this is covered in Ch. 9 of my book Battlespace of Mind: AI, Cybernetics and Information Warfare [link] [pdf]

    A brief overview of Arduino microcontrollers is related by Wikipedia:

    Arduino (/ɑːrˈdwiːnoʊ/) is an Italian open-source hardware and software company, project, and user community that designs and manufactures single-board microcontrollers and microcontroller kits for building digital devices. Its hardware products are licensed under a CC BY-SA license, while the software is licensed under the GNU Lesser General Public License (LGPL) or the GNU General Public License (GPL),[1] permitting the manufacture of Arduino boards and software distribution by anyone. Arduino boards are available commercially from the official website or through authorized distributors.[2]

    Arduino board designs use a variety of microprocessors and controllers. The boards are equipped with sets of digital and analog input/output (I/O) pins that may be interfaced to various expansion boards (‘shields’) or breadboards (for prototyping) and other circuits. The boards feature serial communications interfaces, including Universal Serial Bus (USB) on some models, which are also used for loading programs. The microcontrollers can be programmed using the C and C++ programming languages (Embedded C), using a standard API which is also known as the Arduino Programming Language, inspired by the Processing language and used with a modified version of the Processing IDE. In addition to using traditional compiler toolchains, the Arduino project provides an integrated development environment (IDE) and a command line tool developed in Go.

    (source: https://en.wikipedia.org/wiki/Arduino )

    In my device I used an Arduino Uno card which attaches to a breadboard that has a simple electronic circuit for regulating electric current to a coil; the coil itself is considered a form of Transcranial Magnetic Stimulation, except in this case it is referred to as Transcerebral Magnetic Stimulation. First, I will cover the wiring of the breadboard electric circuit, then go over the code file which is used to program the Uno card.

    To begin we will need the following parts list:

    Parts list:

    • Arduino Uno Board, plus necessary connectors (purchase a starter kit online)
    • Arduino Breadboard
    • 1K Ohm Resistor
    • TIP120 (TO-220 package) Voltage Regulator
    • Diode 1N4007
    • Red LED (620nm Wavelength)

    Make the Circuit:

    see the breadboard completed image below and make sure your breadboard is oriented the proper way before adding electronic components to the board.

    1. Connect the LED to pins a21 and a22.
    2. Connect Arduino Uno output pin 9 to pin b21, in front of the right leg of the LED on the breadboard, and connect ground to the breadboard’s negative base rail.
    3. Take the 1K Ohm resistor and connect it to pin b21 in front of the LED’s short leg; connect the resistor’s other end to pin f20 in front of the TIP120 voltage regulator.
    4. Voltage regulator: facing it properly, the plastic should be toward you with the metal end farthest away from you and the metal mounting hole to the rear; place it in pins g20, g19, g18.
    5. Ground the regulator: grab a short or medium jumper cable (red or another color as you wish); connect one end to the negative rail and the other end to the right pin of the voltage regulator, placed in j18.
    6. Coil in: pin h19, in front of the center of the voltage regulator.
    7. Diode (the grey strip is the out side): grey end into the positive (+) pin 20 of the breadboard base rail; the in side goes to pin j19, behind the out to the coil. (see video)
    8. Coil out: to the positive (+) pin 21 on the board base rail; it should be next to the diode, one pin location to the left of it.

    Then your finished product should look like this:

    Program the Uno Card:

    The next step is to add the C++ logic to the card.

    1. Connect your card via USB to your computer.
    2. Open the Arduino IDE; download it at https://www.arduino.cc/en/software
    3. Get a copy of the Arduino code file from my github repo: https://github.com/autonomous019/ahronov-bohm-cybersecurity/blob/main/accelerate_frequency_driver.ino
    4. Then upload it to the board; you can search the internet for plenty of videos on how to upload a sketch file to your card. Be sure to select the proper board and serial port for your board in the IDE.
    5. Connect the power pin on the card to your breadboard: use a jumper cable to connect pin 9 on the card to the pin in front of the LED’s in leg (the long leg of the LED).
    6. Connect the card’s ground (“GND”) to your breadboard’s negative rail (-).
    If you were successful you should see a flashing red LED on the breadboard.

    Some things to note about the sketch file, which is written in C++:

    int ledPin = 9;
    int sensorValue;     // variable to store data
    const int adc = 0 ;   //naming pin 0 of analog input side as ‘adc’
    
    void setup() {
      
      pinMode(9, OUTPUT);
      Serial.begin(9600);  //init serial comm
      
    }

    this bit of code initializes pin 9 as the power source to the bread board.

    //add as many 'pins' or 'counters' to add random config to em waves for security, prevents others from hijacking your shield
    int random_key_1 = random_delay_time(6,30);
    int random_key_2 = random_delay_time(6,30);
    int random_key_3 = random_delay_time(6,30);
    int random_key_4 = random_delay_time(6,30);

    this code block sets a small security ‘salt’ to create a random time delay in the looping of the circuit power.

    The circuit uses a descending time delay from 24ms down to 6ms.

    int phaser_a = phase_former(24,3);
    int phaser_b = phase_former(22,3);
    int phaser_c = phase_former(20,3);
    int phaser_d = phase_former(18,3);
    int phaser_e = phase_former(16,3);
    int phaser_f = phase_former(14,3);
    int phaser_g = phase_former(12,3);
    int phaser_h = phase_former(10,3);
    int phaser_i = phase_former(8,3);
    int phaser_k = phase_former(6,3);
    
    
    }// ends loop()
    
    
    int phase_former(int delay_time, int point_duration) {
      int v = 0;
      digitalWrite(9,HIGH);
      delay(point_duration);
      digitalWrite(9,LOW);
      delay(delay_time);
      return v; 
    }

    We set up specific phases for the circuit and generate each phase with the phase_former() function, which drives pin 9 HIGH for the point duration and then LOW for the given delay time.
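
    As a quick sanity check on the timing: each phase_former(delay_time, 3) call holds pin 9 HIGH for 3 ms and then LOW for delay_time ms, so one pass through the ten phases above takes 10 × 3 ms + (24 + 22 + … + 6) ms = 30 ms + 150 ms = 180 ms, before any of the random ‘salt’ delays are added.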

    If you want to learn how to make the coil and want more information on the overall project see the github repo at https://github.com/autonomous019/ahronov-bohm-cybersecurity

  • Coding a Dynamic Datadriven Front End Application using only Javascript and HTML5

    In the following I will be going over the CSS, HTML5 and JavaScript used to code my cute little app that ranks American Football offensive linemen according to data provided by the NFL in one of their Kaggle challenges. This app is called “O-Line”, which you can see at https://feirmeoirsonrai.me/oline/

    And if you want to geek out you can read my paper on the rankings system at Defending the Edge: Evaluating OT Performance through Euclidean Measurements 

    To get a picture of the different components involved in the User Interface for this web app we can take a quick look at the code of the <head> element in the index page HTML:

        <head>
            <meta charset="utf-8" />
            <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" />
            <meta name="description" content="american football data analytics" />
            <meta name="author" content="michael mccarron macciarain@protonmail.com" />
    
    
            <meta http-equiv="cache-control" content="max-age=0" />
            <meta http-equiv="cache-control" content="no-cache" />
            <meta http-equiv="expires" content="0" />
            <meta http-equiv="pragma" content="no-cache" />
    
            <title>0-Line: NFL Analytics</title>
            
            <!-- JS JS JS JS JS-->
            <script src="./js/jquery-3.6.3.min.js"></script>
            <script src="./js/bootstrap.min.js"></script>
            <!-- Plotly.js -->
            <script src="./js/plotly-latest.min.js"></script> <!-- plotly vers plotly.js v1.58.5-->
            <script src="./js/d3.v7.min.js"></script>
    
            <!-- Favicon-->
            <link rel="icon" type="image/x-icon" href="assets/img/logos/o.ico" />
            <!-- Font Awesome icons (free version)-->
            <script src="fontawesome/js/all.js" crossorigin="anonymous"></script>
    
    
    
            <!-- CSS CSS CSS -->
            <!-- Google fonts-->
            <!-- Bootstrap version: Bootstrap v5.1.3 (https://getbootstrap.com/) -->
            <link href="css/montserrat.css" rel="stylesheet" type="text/css" />
            <link href="css/roboto.css" rel="stylesheet" type="text/css" />
            <!-- Core theme CSS (includes Bootstrap)-->
            <link rel="stylesheet" href="./css/bootstrap.min.css">
            <link href="css/styles.css" rel="stylesheet" />
            <link href="fontawesome/css/all.css" rel="stylesheet" />
            
        </head>

    Let’s first note the JavaScript libraries used in this project: Bootstrap, jQuery, Plotly and D3. We shall go over these libraries momentarily. Next, notice the CSS libraries used, such as Bootstrap again, Fontawesome and the custom CSS for this project in css/styles.css.

    Bootstrap

    Bootstrap (formerly Twitter Bootstrap) is a free and open-source CSS framework directed at responsive, mobile-first front-end web development. It contains HTML, CSS and (optionally) JavaScript-based design templates for typography, forms, buttons, navigation, and other interface components.

    As of May 2023, Bootstrap is the 17th most starred project (4th most starred library) on GitHub, with over 164,000 stars. According to W3Techs, Bootstrap is used by 19.2% of all websites.

    https://en.wikipedia.org/wiki/Bootstrap_(front-end_framework)

    The thing I like about Bootstrap is its popularity, which means it’s tried and trusted by many front-end developers; as well, there are plenty of tutorials on all aspects of Bootstrap. It also has some advanced features such as Sass and CSS variables.

    One can get started with Sass at the W3Schools site, something I have used often in my coding life, they relate:

    • Sass stands for Syntactically Awesome Stylesheet
    • Sass is an extension to CSS
    • Sass is a CSS pre-processor
    • Sass is completely compatible with all versions of CSS
    • Sass reduces repetition of CSS and therefore saves time
    • Sass was designed by Hampton Catlin and developed by Natalie Weizenbaum in 2006
    • Sass is free to download and use

    CSS variables can be seen in the following quick example:

    <html>
    <head>
        <style>
            :root {
                --main-bg-color: lightblue;
                --main-text-color: darkblue;
            }
            body {
                background-color: var(--main-bg-color);
                color: var(--main-text-color);
            }
        </style>
    </head>
    <body>
        <h1>Welcome to My Website</h1>
        <p>This is a sample paragraph demonstrating CSS variables.</p>
    </body>
    </html>
    

    We call the variable through the var() syntax, then define its values in the CSS pseudo-class :root {}. A CSS pseudo-class is:

    A pseudo-class is used to define a special state of an element.

    For example, it can be used to:

    • Style an element when a user moves the mouse over it
    • Style visited and unvisited links differently
    • Style an element when it gets focus
    • Style valid/invalid/required/optional form elements

    Many of these things used to be accomplished with the old jQuery library that we older developers are more used to, but if you’re starting out read the previous link. 🙂

    In my css files for example I use the :root pseudo class in the following way:

    :root {
      --bs-blue: #0d6efd;
      --bs-indigo: #6610f2;
      --bs-purple: #6f42c1;
      --bs-pink: #d63384;
      --bs-red: #dc3545;
    ...
      --bs-body-bg: #fff;
    }
    
    /* file: css/styles.css */
    /* note: the '...' is just filler for omitted code */

    then for instance in the styles.css file later on when defining attributes of the <body> element we use the variable call:

    body {
      ...
    
      background-color: var(--bs-body-bg);
      ...
    }

    This makes writing the CSS a lot easier and more succinct, taking less time once you are used to the syntax and overall operation of Bootstrap. Back in the day when the WWW was in its infancy, when I got started, things like this would keep one busy for hours on end, as each element would need to be individually coded up instead of using classes and variables.

    Fontawesome is a convenient library to deal with icons, you can read more about it at https://fontawesome.com/

    The main JavaScript libraries used in this project comprise Bootstrap, which is used for common UI elements like the carousel and for styling forms and buttons programmatically, without having to create one’s own functions to achieve those things. The two main snazzy libraries, used for some less common UI elements such as plots and animations, are Plotly and d3.

    I use d3 version 7 in this project, a brief overview of d3.js:

    D3.js (also known as D3, short for Data-Driven Documents) is a JavaScript library for producing dynamic, interactive data visualizations in web browsers. It makes use of Scalable Vector Graphics (SVG), HTML5, and Cascading Style Sheets (CSS) standards. It is the successor to the earlier Protovis framework. Its development was noted in 2011, as version 2.0.0 was released in August 2011. With the release of version 4.0.0 in June 2016, D3 was changed from a single library into a collection of smaller, modular libraries that can be used independently.

    The D3.js library uses pre-built functions to select elements, create SVG objects, style them, or add transitions, dynamic effects, or tooltips. These objects can also be styled using CSS. Large datasets can be bound to SVG objects using D3.js functions to generate text/graphic charts and diagrams. The data can be in various formats such as JSON, comma-separated values (CSV) or GeoJSON, but, if required, JavaScript functions can be written to read other data formats.

    (source: https://en.wikipedia.org/wiki/D3.js )

    Later, I shall go into the technical details of implementing a data-driven web app relying only on JavaScript, with no server-side backend such as Ruby, Python, PHP, ASP or Java, using JS and CSV only.

    Plotly is a JavaScript library that provides a UI for web-based data visualizations; many Python-based data science developers are familiar with Plotly in Python, but in this case it is used from HTML. Built on top of d3.js and stack.gl, Plotly.js is a high-level, declarative charting library. plotly.js ships with over 40 chart types, including 3D charts, statistical graphs, and SVG maps. plotly.js is free and open source and you can view the source, report issues or contribute on GitHub.

    Now that we have an overview of the CSS and JS involved in this web app, we can start to take a deeper dive into the code. As previously stated, this web application is based solely on HTML5, CSS and JavaScript. It is not a single-page application, which to me is an overstated desire that often is not functional except for simple sites; rather, it is an app that has no back-end. For those new to the field: in the good old days most dynamic sites used back-end programming languages like Ruby, PHP or Java to serve up dynamic content. This project does not rely on any of those ‘back-end’ languages but relies on the ‘front-end’ JavaScript language, which is a derivative of ECMAScript (the latest version being ECMAScript 2023, or 14; this project was based on version 6). It should not be confused with the programming language Java, a separate ball of wax. This project also does not deal with TypeScript, although in another project I shall be going over TypeScript, which is a strongly typed programming language that builds on JavaScript, adding types to JS, something normally the domain of languages like PHP and Java.

    Now with all that nitty gritty out of the way, we can get to the bone of this cut of meat. The first thing I would like to address is how I dealt with replacing the standard URL query strings for dynamic pages, where ?something=value&anothersomething=anothervalue is normally handled using ‘back-end’ languages. I used hashes to separate my URL calls, so that, for instance, the URL:

    https://feirmeoirsonrai.me/oline/play.html#42476_gameId:2021092603

    has, to the right of the hash symbol (‘#’), the arguments for the player_id and the game_id. I created a simple JavaScript parser to deal with parsing the hash.

    <script>
    
                var hash = $(location).prop('hash').substr(1);
                let uri_args = hash.split(':');
                console.log(uri_args);
                player_id = uri_args[0];
                player_id = player_id.split('_');
                player_id = player_id[0];
                game_id = uri_args[1];
                play_id = uri_args[2];
                console.log("game id: "+game_id);
                console.log("play_id: "+play_id);
                console.log("player id: "+player_id);
                                     
    </script>
    

    The hash variable retrieves the location hash using jQuery (a somewhat outdated method, but one I am used to), and the split method, using the colon symbol ‘:’, divides the string into two parts: the first part, uri_args[0], gives us the player_id, ‘42476’ (after a further split on ‘_’), and the second part gives us the game_id, 2021092603

    This then serves up the 18 plays for the game of the player:

    For serving up data, I have parsed all my data into csv file format and json format. For instance, the data for https://feirmeoirsonrai.me/oline/team.html#ARI

    is dynamically served up from the JSON file:

    {"2021091207":2.8938053097,"2021091909":2.9089285714,"2021092603":2.9267241379,"2021100309":2.8297546012,"2021101011":2.850877193,"2021101708":2.869140625,"2021102408":2.8168103448,"2021102800":2.8934782609}

    Here we have a simple two-level object notation: for each game, the game_id and the overall rating of the offensive line for that game, e.g. 2.8938053097, which is then put into the UI for presentation.

    you can learn more about JSON at https://developer.mozilla.org/en-US/docs/Learn_web_development/Core/Scripting/JSON

    The nuts and bolts of presenting this in the UI is handled by Javascript through the following code:

    team_data = [];
    
        $(document).ready(function(){
           
            p_id = player_id;
           
            $.getJSON("./js/json/teams/"+team+"/summary.json", function(data){
                cnt = 0;
                details = document.getElementById( "games" )
                team_name = document.getElementById( "team_name");
                $(team_name).html(team);
                plays_text = "";
    
                for (const key in data) {
                    if (data.hasOwnProperty(key)) {
                      console.log(`${key}: ${data[key]}`);
                      let p_data = `${key}: ${data[key]}`;
                      t_qbi = data[key];
                      team_data.push(t_qbi);
                      game_id = key;
    
                      t_qbi = (Math.round(t_qbi * 100) / 100).toFixed(2);
                      console.log(game_id + " -- "+ t_qbi);
                  
                       
                       var play_plot = "Team QBI for game: "+t_qbi+"  <a class='research-link highlight' href='game.html#"+team+":"+game_id+"'>;   View Game Details: "+game_id+"   </a><br />\n";
                       plays_text = play_plot;
                       $(details).append(plays_text + "<hr>");
    
                    }
                  }
                  const arr_avg = arr => arr.reduce((a,b) => a + b, 0) / arr.length
                  season_qbi_avg = arr_avg(team_data);
                  season_qbi_avg = (Math.round(season_qbi_avg * 100) / 100).toFixed(2);
                  let season_qbi = "<small>Week 1-8 of 2021 Collective QBI:</small> "+season_qbi_avg;
                  console.log(season_qbi);
               
                spin = document.getElementById( "spin" )
                $( spin ).hide();
                team_qbi = document.getElementById("team_qbi");
                $( team_qbi ).html(season_qbi);
    
            }).fail(function(){
                    console.log("An error has occurred.");
                });
            
        });
    
    

    This is in the file js/handlers/teams.js. Being a JS-based project, and since there is no privileged user data or any other secure data, all my code is open to the public; if you have a browser and want to navigate to it, one can, and then reverse engineer everything oneself.

    As a coder, the most fun and most challenging aspect of this application was the plotting of each play. I wanted to show step by step how the team blocked on each play so one could trace out how the play developed. This was envisioned primarily as an educational tool for offensive line players, so they could see the overall picture of what constituted a bad rating and a good rating. This is all done using JavaScript, JSON and CSV in an HTML5 UI layer; below we will walk through the code to see how this is done using Plotly. See the following link for a live example: https://feirmeoirsonrai.me/oline/games_plot.html#2021091207:152

    One is able to view the play from moment to moment as it unfolds, so that at time 0 we see:

    and then at a later time at t=37, which is actually 3.7sec into the play:

    One can also see a path trace of the same play I compiled and put in MP4 format:

    https://feirmeoirsonrai.me/oline/plots/plots/2021091207/2021091207_152_all.mp4

    When we look at games_plot.html we see that it relies on a JS handler file to do the heavy lifting; this is js/handlers/plot_11on11.js. One can download the code file as-is at https://feirmeoirsonrai.me/oline/js/handlers/plot_11on11.js

    The first bit of lifting is getting the data, which I do using built-in functions of d3:

    Plotly.d3.csv("./csv/all/"+game_id+"/all_"+play_id+".csv", function (data) {
     function iterateObject(obj) {
            for (const key in obj) {
              console.log(obj);
              if (obj.hasOwnProperty(key)) {
                console.log(`${key}: ${obj[key]}`);
              }
            }
          }
    

    This pulls in a dynamically referenced CSV file, where the data is in comma-separated format, which is then accessible in the ‘data’ variable. We then set up some simple X,Y coordinates, as is common to any data science plot, and create an object to assign the data to specific parts of the object:

           if (!(trace = bytime_frame[role])) {
              trace = bytime_frame[role] = {
                x: [],
                y: [],
                id: [],
                text: [],
                marker: {size: []}
              };
            }

    so that we end up with different arrays, such as x,y, id, text, marker
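
    That lookup happens inside a small helper, getData, which is called in the loop below. The full helper is in the linked plot_11on11.js file; in outline (a sketch of the pattern, following Plotly's animated bubble-chart example, with the lookup container name being my own), it does something like this:

    var lookup = {};   // traces keyed first by time frame, then by role

    function getData(time_frame, role) {
      var bytime_frame, trace;
      // one bucket per moment (time frame) of the play...
      if (!(bytime_frame = lookup[time_frame])) {
        bytime_frame = lookup[time_frame] = {};
      }
      // ...and within that bucket, one trace per role (the snippet shown above):
      if (!(trace = bytime_frame[role])) {
        trace = bytime_frame[role] = {
          x: [],
          y: [],
          id: [],
          text: [],
          marker: {size: []}
        };
      }
      return trace;
    }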

    we then iterate the data and push the specific data to the right object part:

        // Go through each row, get the right trace, and append the data:
          for (var i = 0; i < data.length; i++) {
            var datum = data[i];
            var trace = getData(datum.time_frame, datum.role);
            trace.text.push(datum.position);
            trace.id.push(datum.position);
            trace.x.push(datum.x);
            trace.y.push(datum.y);
            trace.marker.size.push(20000000);
        
            
          }
        

    then we slice the data up and put into a traces array

          // Create the main traces, one for each role:
          var traces = [];
          for (i = 0; i < roles.length; i++) {
            var data = firsttime_frame[roles[i]];
            var marker_color = "";
            if(i == 0){
                marker_color = '#000000';
            } else {
                marker_color = '#e64a17';
            }
        
            traces.push({
              name: roles[i],
              x: data.x.slice(),
              y: data.y.slice(),
              id: data.id.slice(),
              text: data.text.slice(),
              mode: 'markers',
              marker: {
                size: data.marker.size.slice(),
                sizemode: 'area',
                color: marker_color,
                sizeref: 175000
              }
              
             //type: 'scatter'
            });

    After this we don’t need to do much more custom handling of the data; the rest is part of the standard operating procedure for serving up any plot. See the link above to the JS file to see the whole picture.

    This was a fun project to develop, and as it was my own project I had free license to do what I wanted with it and decided to have some fun. I hope you enjoy its different way of handling data and its lack of a ‘back-end’. As an experienced developer can see, such a methodology works well for sites not requiring secure logins; for informational sites such a setup can easily replace a LAMP-stack backend and rely solely on front-end technology to get the job done, even though we are dealing with MBs of data.

  • Statistical Awareness for Data Science

    One of the most fundamental requirements for working with machines is an understanding of statistics, which is the basis of most contemporary methods in computer science. Yet it can be an impenetrable subject, requiring a committed level of aspiration to pierce through the fuzzy layers of maths that turn the typical person off the subject. This is not an attempt to make the subject easier to understand, but an abbreviated attempt to get to the meat of the matter at hand.

    Statistics always begins with a discussion of probability and the two concepts of probability: statistical probability and inductive probability. The first is based in quantitative understandings and the latter in qualitative understandings. The first is the one we deal with in ML and software engineering. Quantitative statistical probability deals with two main laws of probability: addition and multiplication.

    A. The Law of Addition

    “The law of addition of probabilities states that if A and B are mutually exclusive events, that is if they cannot both occur together, then the probability that either A or B will occur is equal to the sum of their separate probabilities: in symbols: P(A or B) = P(A) + P(B)

    This follows from the fact, that if A and B are mutually exclusive, the number of times on which either A or B has occurred is the number of times on which A has occurred plus the number of times on which B has occurred; the same must therefore be true of the corresponding proportions and so, as the number of observations increases, of their limiting values or probabilities. This law can be extended to any number of events, provided they are all mutually exclusive.” (Bulmer, 1979, 12-3)
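
    For example, in a single roll of a fair die the events ‘roll a 1’ and ‘roll a 2’ are mutually exclusive, so P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.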

    One way to understand this as a coder is to actually code these formulas in a theoretical example. The code we use in this work is based on Python (see my book PlayAI: Machine Learning in Video Game Design for an introduction to setting up a Python development environment if you are not familiar with Python). We will be using DataFrames to work with statistical data, which is what you will use in the real world to work with Machine Learning, although you can also use the R language.

    Python example:

    In this example we are working with the second, more general rule of addition in stats, for events that are not mutually exclusive:

    # P(A or B) = P(A) + P(B) - P(A and B)

    In a small survey, 32 people responded to the question “Is a hotdog a sandwich?”.

    • 50% of the respondents were female and the rest were male.
    • 11 people responded with ‘yes’ and 21 responded with ‘no’.
    • Of the female participants who took the survey, 5 responded with ‘yes’.

    Below is a DataFrame representing all the responses to the survey:

    import pandas as pd
    
    # creating a Dataframe named 'survey' with 'gender' and 'response' columns
    survey = pd.DataFrame([{'gender': 'female', 'response': 'no'}, {'gender': 'male', 'response': 'yes'}, {'gender': 'female', 'response': 'no'},{'gender': 'female', 'response': 'yes'}, {'gender': 'female', 'response': 'no'}, {'gender': 'female', 'response': 'no'},
    {'gender': 'male', 'response': 'no'}, {'gender': 'female', 'response': 'no'}, {'gender': 'male', 'response': 'no'},{'gender': 'male', 'response': 'yes'}, {'gender': 'male', 'response': 'yes'}, {'gender': 'male', 'response': 'no'},{'gender': 'female', 'response': 'no'}, {'gender': 'female', 'response': 'no'}, {'gender': 'male', 'response': 'no'},{'gender': 'female', 'response': 'yes'}, {'gender': 'male', 'response': 'yes'}, {'gender': 'female', 'response': 'no'},
    {'gender': 'male', 'response': 'yes'}, {'gender': 'male', 'response': 'yes'}, {'gender': 'female', 'response': 'no'},{'gender': 'male', 'response': 'no'}, {'gender': 'female', 'response': 'no'}, {'gender': 'female', 'response': 'yes'},{'gender': 'female', 'response': 'yes'}, {'gender': 'male', 'response': 'no'}, {'gender': 'male', 'response': 'no'},{'gender': 'male', 'response': 'no'}, {'gender': 'female', 'response': 'no'}, {'gender': 'male', 'response': 'no'},{'gender': 'male', 'response': 'no'}, {'gender': 'female', 'response': 'yes'}])
    survey
    
    prob_female_or_yes = len(survey[(survey['gender'] == 'female') | (survey['response'] == 'yes')]) / len(survey)
    prob_female_or_yes
    
    P(female) = 0.5
    P(yes) = 11/32 = 0.3438
    P(female and yes)
    = P(yes | female) * P(female)
    = 5/16 * 0.5 = 0.1563
    
    P(female or yes)
    = 0.5 + 0.34375 - 0.15625
    = 0.6875
    
    

    (source: https://discovery.cs.illinois.edu/guides/Probability/addition-rule/)

    The probability is 0.6875.

    B. The Law of Multiplication

    “The law of multiplication of probabilities states that if A and B are two events, then the probability that both A and B will occur is equal to the probability that A will occur multiplied by the conditional probability that B will occur given that A has occurred, or in symbols P(A and B) = P(A) x P(B | A).”

    Caroline and Victor are feeling adventurous, and want to travel twice this year. However, they are feeling indecisive, and put all of the places they are considering on slips of paper in a hat. Caroline randomly selects 2 slips of paper with destinations from the hat without replacement. What is the probability that both destinations are in Europe?

    Hand Calculations

    The question asks us to find the probability of both events occurring, so we know we need to use the multiplication rule. However, since Caroline is selecting from the hat without replacement, we know the events are dependent. Therefore, we need to use the formula P(A and B) = P(A) * P(B|A) where:

    • P(A) is the probability Caroline chooses a destination in Europe on her first selection. This is the total amount of destinations in Europe (7) divided by the total amount of destinations (21), 7⁄21 or approximately 33.33% or approximately 0.3333.
    • P(B) is the probability Caroline chooses a destination in Europe on her second selection.
    • P(B|A) is the probability Caroline chooses a destination in Europe on her second selection given she chose Europe on her first selection. This is the total amount of destinations in Europe divided by the total amount of destinations after choosing 1 Europe slip, 6⁄20 or 30% or 0.3.

    Therefore:

    • P(A and B) = P(A) * P(B|A) = 7⁄21 * 6⁄20 = 0.1 or 10%
    • The probability that Caroline chooses 2 slips of paper with destinations in Europe from a hat without replacement is 10% (0.1).


    import pandas as pd
    # Create a DataFrame named 'destinations' with 'city', 'country', and 'continent' columns.
    destinations = pd.DataFrame([
        {'city': 'Bali', 'country': 'Indonesia', 'continent': 'Asia'},
        {'city': 'New Orleans', 'country': 'USA', 'continent': 'North America'},
        {'city': 'Dublin', 'country': 'Ireland', 'continent': 'Europe'},
        {'city': 'Sydney', 'country': 'Australia', 'continent': 'Australia'},
        {'city': 'Paris', 'country': 'France', 'continent': 'Europe'},
        {'city': 'Cape Town', 'country': 'South Africa', 'continent': 'Africa'},
        {'city': 'New York City', 'country': 'USA', 'continent': 'North America'},
        {'city': 'Rome', 'country': 'Italy', 'continent': 'Europe'},
        {'city': 'London', 'country': 'England', 'continent': 'Europe'},
        {'city': 'Tokyo', 'country': 'Japan', 'continent': 'Asia'},
        {'city': 'Barcelona', 'country': 'Spain', 'continent': 'Europe'},
        {'city': 'Rio de Janeiro', 'country': 'Brazil', 'continent': 'South America'},
        {'city': 'Agra', 'country': 'India', 'continent': 'Asia'},
        {'city': 'Cairo', 'country': 'Egypt', 'continent': 'Africa'},
        {'city': 'Berlin', 'country': 'Germany', 'continent': 'Europe'},
        {'city': 'Toronto', 'country': 'Canada', 'continent': 'North America'},
        {'city': 'Seoul', 'country': 'South Korea', 'continent': 'Asia'},
        {'city': 'Lima', 'country': 'Peru', 'continent': 'South America'},
        {'city': 'Copenhagen', 'country': 'Denmark', 'continent': 'Europe'},
        {'city': 'Beijing', 'country': 'China', 'continent': 'Asia'},
        {'city': 'Chicago', 'country': 'USA', 'continent': 'North America'}
    ])
    destinations
    
    # probability of selecting a European destination with the 1st slip
    prob_europe1 = len(destinations[destinations['continent'] == 'Europe']) / len(destinations)
    
    # probability of selecting a European destination with the 2nd slip,
    # given the 1st slip was European (one fewer Europe slip, one fewer slip overall)
    prob_europe2 = (len(destinations[destinations['continent'] == 'Europe']) - 1) / (len(destinations) - 1)
    
    # answer
    prob_2_europe_slips = prob_europe1 * prob_europe2
    prob_2_europe_slips
    

    Probability is 0.1

    (source: https://discovery.cs.illinois.edu/guides/Probability/multiplication-rule/ )

    3. Random Variables and Probability Distributions

    In Machine Learning we are solving for X by way of features Y; X is referred to as a random variable- a numerical variable which takes different values with different probabilities.

    A simple example of this process is flipping coins:

    X = number of heads. [X is a random variable or function]

    Here, the sample space S = {HH, HT, TH, TT}

    where the sample space is merely the set of all possible outcomes (the different combinations of features) that give us X, a numerical variable which takes different values with different probabilities.
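
    To make the coin example concrete, here is a minimal sketch (plain Python standard library; the variable names are just illustrative) that enumerates the sample space and derives the probability of each value of X, assuming all four outcomes are equally likely:

    from itertools import product
    from collections import Counter
    
    # Sample space for two coin flips: HH, HT, TH, TT
    sample_space = [''.join(flips) for flips in product('HT', repeat=2)]
    
    # X = number of heads in each outcome
    x_values = [outcome.count('H') for outcome in sample_space]
    
    # Probability of each value of X, assuming equally likely outcomes
    counts = Counter(x_values)
    probabilities = {x: count / len(sample_space) for x, count in counts.items()}
    print(probabilities)  # {2: 0.25, 1: 0.5, 0: 0.25}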

    Two basic components to understanding the sample space and finding X are frequency distributions and probability distributions.

    Frequency distribution- the representation of how often different values occur across all results

    (Source: https://www.geeksforgeeks.org/frequency-distribution )

    Probability distribution- the distribution of the relative frequencies (probabilities) over the full range of possible events

    (Source: https://datasciencedojo.com/blog/probability-distributions-in-data-science/ )

    Cumulative probability function F(x): the probability that X is less than or equal to some particular value x:

    F(x) = P(X ≤ x)

    The cumulative probability function can clearly be calculated by summing the probabilities of all values less than or equal to x:

    F(x) = ∑ P(X = xi) for all xi ≤ x

    For a discrete random variable F(x) will be a step function. A continuous random variable is based on measurements rather than fixed values; the measurement occurs in a range and the graph looks continuous.

    For example, suppose a die is thrown (X = outcome of the die).

    Here, the sample space S = {1, 2, 3, 4, 5, 6}.
    The output of the function will be:

    • P(X = 1) = 1/6
    • P(X = 2) = 1/6
    • P(X = 3) = 1/6
    • P(X = 4) = 1/6
    • P(X = 5) = 1/6
    • P(X = 6) = 1/6

    This also satisfies the condition ∑ P(X = i) = 1 for i = 1 to 6, since:
    P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4) + P(X = 5) + P(X = 6) = 6 × 1/6 = 1
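
    Here is a small sketch of the die example (using pandas, which the travel example above already relies on; the names are illustrative) that builds the probability of each outcome and the step-like cumulative probability function F(x) described above:

    import pandas as pd
    
    # Outcomes of a fair six-sided die, each with probability 1/6
    die = pd.Series(1 / 6, index=[1, 2, 3, 4, 5, 6], name='P(X = x)')
    
    # The probabilities sum to 1 (up to floating-point rounding)
    print(die.sum())
    
    # Cumulative probability function F(x) = P(X <= x): a step function for a discrete variable
    cdf = die.cumsum()
    print(cdf)  # 1/6, 2/6, 3/6, 4/6, 5/6, 1.0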

    Probability density function- f(x) [not F(x)]. The area under the density function between any two points, x1 and x2, that is to say the integral of the function between them, represents the probability that the random variable will lie between these two values: P(x1 ≤ X ≤ x2) = ∫ f(x) dx evaluated from x1 to x2.

    If dx is a very small increment in x, so small that the density function is practically constant between x and x + dx, then the probability that X will lie in this small interval is very nearly f(x)dx, which is the area of a rectangle with height f(x) and width dx. f(x) may therefore be thought of as representing the probability density at x.

    A continuous probability distribution can also be represented by its cumulative probability function F(x), which, as in the discrete case, specifies the probability that X is less than or equal to x and which is the limiting form of the cumulative frequency diagram showing the proportion of observations up to a given value.
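
    To make the continuous case concrete, here is a minimal sketch (assuming SciPy is available; the standard normal distribution and the interval endpoints are chosen purely for illustration) showing that the area under the density f(x) between two points matches F(x2) - F(x1):

    from scipy.stats import norm
    from scipy.integrate import quad
    
    x1, x2 = -1.0, 1.0  # illustrative interval endpoints
    
    # Area under the density f(x) between x1 and x2 (numerical integration)
    area, _ = quad(norm.pdf, x1, x2)
    
    # The same probability via the cumulative function: F(x2) - F(x1)
    prob = norm.cdf(x2) - norm.cdf(x1)
    
    print(area, prob)  # both approximately 0.6827 for a standard normal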

    Frequency distribution and probability distribution are two fundamental concepts in statistics that describe how data points are spread across different values or ranges.

    Frequency Distribution

    A frequency distribution is a summary of how often each value or range of values occurs in a dataset. It is typically represented in a table or a graph, showing the frequency (count) of each unique value or interval.

    Example of Frequency Distribution

    Consider a dataset representing the number of pets owned by 20 households:

    • Data: 1, 2, 2, 3, 1, 0, 2, 3, 1, 4, 0, 2, 1, 3, 0, 2, 1, 4, 2, 3

    You can create a frequency distribution table:

    | Number of Pets (x) | Frequency (f) |
    | --- | --- |
    | 0 | 3 |
    | 1 | 5 |
    | 2 | 6 |
    | 3 | 4 |
    | 4 | 2 |

    This table shows how many households have 0, 1, 2, 3, or 4 pets.

    Probability Distribution

    A probability distribution describes the likelihood of each possible value occurring in a random variable. It assigns a probability to each value, where the sum of all probabilities equals 1. Probability distributions can be discrete or continuous.

    Example of Probability Distribution

    Using the same dataset of pet ownership, we can convert the frequency distribution into a probability distribution:

    1. Calculate the total number of households: N = 20.
    2. Calculate the probability for each number of pets:

    | Number of Pets (x) | Frequency (f) | Probability (P) = f/N |
    | --- | --- | --- |
    | 0 | 3 | 3/20 = 0.15 |
    | 1 | 5 | 5/20 = 0.25 |
    | 2 | 6 | 6/20 = 0.30 |
    | 3 | 4 | 4/20 = 0.20 |
    | 4 | 2 | 2/20 = 0.10 |

    This table shows the probability of randomly selecting a household with a certain number of pets. The probabilities add up to 1:

    0.15 + 0.25 + 0.30 + 0.20 + 0.10 = 1.00
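
    The same conversion can be done directly in pandas (a minimal sketch; the Series name is just illustrative): value_counts gives the frequency distribution, and normalize=True turns it into the probability distribution.

    import pandas as pd
    
    # Number of pets owned by 20 households (the dataset from the example above)
    pets = pd.Series([1, 2, 2, 3, 1, 0, 2, 3, 1, 4, 0, 2, 1, 3, 0, 2, 1, 4, 2, 3], name='pets')
    
    # Frequency distribution: count of each value
    frequency = pets.value_counts().sort_index()
    
    # Probability distribution: relative frequency of each value (sums to 1)
    probability = pets.value_counts(normalize=True).sort_index()
    
    print(frequency)    # 0: 3, 1: 5, 2: 6, 3: 4, 4: 2
    print(probability)  # 0: 0.15, 1: 0.25, 2: 0.30, 3: 0.20, 4: 0.10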

    Summary

    • Frequency Distribution: Counts occurrences of each value in a dataset.
    • Probability Distribution: Assigns probabilities to each value, summing to 1.

    Both distributions are essential for understanding data and making statistical inferences.

    (Source: https://www.quora.com/What-is-frequency-distribution-and-probability-distribution-with-examples )

    Distributions are an important topic in ML and Data Science. In the next section we will take a look at the many different distributions.



    Works Cited:

    Bulmer, M.G. (1979) Principles of Statistics. Dover Publications

  • Play AI Bibliography

    Baker, B. (2019) ‘Slitherine’s Command: the wargame transforming operational simulation’ in Analysis https://www.army-technology.com/features/military-simulation-game/

    Calvano, Emilio and Calzolari, Giacomo and Denicolo, Vincenzo and Pastorello, Sergio (2018) ‘Artificial Intelligence, Algorithmic Pricing and Collusion’ (December 20, 2018). Available at SSRN: https://ssrn.com/abstract=3304991 or http://dx.doi.org/10.2139/ssrn.3304991 https://www.technologyreview.com/the-download/612947/pricing-algorithms-can-learn-to-collude-with-each-other-to-raise-prices/ (accessed 2/15/2019)

    De Byl, Penny (2019) The Beginner’s Guide to Artificial Intelligence in Unity.
    A practical guide to programming non-player characters for games.
    Udemy Course: https://www.udemy.com/artificial-intelligence-in-unity/

    De Jonge, Dave (2010) Optimizing a Diplomacy Bot Using Genetic Algorithms, Masters Thesis

    http://www.ellought.demon.co.uk/dipai

    Fallon, John (2013) Believable Behaviour of Background Characters in Open World Games https://www.scss.tcd.ie/publications/theses/diss/2013/TCD-SCSS-DISSERTATION-2013-024.pdf

    Goodfellow, Ian. Bengio, Yoshua. Courville, Aaron (2016) ‘Deep Learning’ MIT Press online: http://www.deeplearningbook.org 

    Green (2019) ‘What Can Psychopaths Teach us About AI’ online: https://thenextweb.com/artificial-intelligence/2019/03/15/what-can-psychopaths-teach-us-about-ai

    Gruber, Capt. Donald.  (2015) ‘Tactical AI in Real Time Strategy Games’ Air Force Institute of Technology

    Gunn, E.A.A. Craenen, B.G.W., Hart, E.  (2009) ‘A Taxonomy of Video Games and AI’

    Hudlicka, Eva (2008) What Are We Modeling When We Model Emotion?
    Psychometrix Associates, Inc. https://works.bepress.com/eva_hudlicka/8/

    Jagielski, Matthew. (2018)  ‘Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning‘ IEEE Security and Privacy Symposium
    http://github.com/jagielski/manip-ml
    presentation video: https://youtu.be/ahC4KPd9lSY (accessed 3/6/19)

    Larsen, John Alec. (2016) A Chatbot Service for use in Video Game Development, https://core.ac.uk/reader/396761

    McCarron, Michael J. (2024) Battlespace of Mind: AI and Cybernetics in Information Warfare, Trine Day Publishing, ISBN: 9781634244244 https://drive.google.com/file/d/142VRVDXCo5R4R3C4MQXszDbXOZo4y2Vm/view?usp=sharing

    Millington, Ian. Funge, John (2016) Artificial Intelligence for Games, 2nd ed. ISBN 978-0-12-374731-0

    Osaba, Osonde and Davis, Paul (2017) ‘An Artificial Intelligence/Machine Learning Perspective on Social Simulation: New Data and New Challenges’ Rand Corp. 

    Robertson, Glen. Watson, Ian. (2014) A Review of Real-Time Strategy Game AI University of Auckland

    Sanatan, Marcus. (2019) ‘Theory of Computation: Finite State Machines’, online: https://stackabuse.com/theory-of-computation-finite-state-machines/ (accessed 6/15/19)

    Steinhardt, Jacob. Koh, Pang Wei, Liang, Percy. (2018) ‘Certified Defenses for Data Poisoning Attacks’ Neuro Information Processing Systems Conference online: https://papers.nips.cc/paper/6943-certified-defenses-for-data-poisoning-attacks.pdf

    Surber, Regina (2018) ‘Artificial Intelligence: Autonomous Technology (AT), Lethal Autonomous Weapons Systems (LAWS) and Peace Time Threats’ ICT4Peace Foundation and the Zurich Hub for Ethics and Technology (ZHET)

    Sutton, R., Barto, A. (1998) Reinforcement Learning: An Introduction ISBN: 9780262193986 Publisher: The MIT Press, http://incompleteideas.net/book/ebook/the-book.html

    Tarasenko, Sergey. (2016). Emotionally Colorful Reflexive Games. https://arxiv.org/abs/1101.0820

    Thompson, Tommy (2014) In the Director’s Chair, lecture 4, https://medium.com/@t2thompson/in-the-directors-chair-the-ai-of-left-4-dead-78f0d4fbf86a

    Thompson, Tommy (2018)  ‘Facing Your Fear’  online: https://aiandgames.com/facing-your-fear/

    Thompson, Tommy (2018b) ‘The Road to War: The AI of Total War’ Online: https://www.gamasutra.com/blogs/TommyThompson/20180131/313865/The_Road_To_War__The_AI_of_Total_War_Part_1.php

    Rose, Caroline (2015) Realistic Dialogue Engine for Video Games The University of Western Ontario

    Webb, Amy (2019) ‘The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity’ Business Insider Online: https://www.businessinsider.com/amy-webb-big-nine-artificial-intelligence-2019-2/?r=US&IR=T (accessed 2/25/19)

    Weber et al, (2011) Building Human-Level AI for Real Time Strategy Games, 2011 AAAI Fall Symposium