How to standardize ONE column in Spark using StandardScaler?


I am trying to standardize (mean = 0, std = 1) one column ('age') in my data frame. Below is my code in Spark (Python):

from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import VectorAssembler
from pyspark.ml import Pipeline

# Make my 'age' column an assembler type:
age_assembler = VectorAssembler(inputCols=['age'], outputCol="age_feature")

# Create a scaler that takes 'age_feature' as an input column:
scaler = StandardScaler(inputCol="age_feature", outputCol="age_scaled", withStd=True, withMean=True)

# Creating a mini-pipeline for those 2 steps:
age_pipeline = Pipeline(stages=[age_assembler, scaler])
scaled = age_pipeline.fit(sample17)
sample17_scaled = scaled.transform(sample17)
type(sample17_scaled)

It seems to run just fine, and the very last line produces: sample17_scaled: pyspark.sql.dataframe.DataFrame

But when I run sample17_scaled.printSchema(), it shows that the new column age_scaled is of type vector:

|-- age_scaled: vector (nullable = true)

How can I calculate anything using this new column? For example, I can't calculate a mean. When I try, it says the column should be 'long' and not a UDT.
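For illustration, here is a minimal aggregation attempt of the kind that fails (assuming sample17_scaled from the pipeline above):

from pyspark.sql.functions import mean

# 'age_scaled' is a vector (VectorUDT) column, so a plain numeric
# aggregate over it is rejected by the analyzer
sample17_scaled.select(mean("age_scaled")).show()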

Thank you very much!

Answer

Just use plain aggregation:

from pyspark.sql.functions import stddev, mean, col

sample17 = spark.createDataFrame([(1, ), (2, ), (3, )]).toDF("age")

(sample17
    .select(mean("age").alias("mean_age"), stddev("age").alias("stddev_age"))
    .crossJoin(sample17)
    .withColumn("age_scaled", (col("age") - col("mean_age")) / col("stddev_age")))

# +--------+----------+---+----------+
# |mean_age|stddev_age|age|age_scaled|
# +--------+----------+---+----------+
# |     2.0|       1.0|  1|      -1.0|
# |     2.0|       1.0|  2|       0.0|
# |     2.0|       1.0|  3|       1.0|
# +--------+----------+---+----------+

or

mean_age, stddev_age = sample17.select(mean("age"), stddev("age")).first()
sample17.withColumn("age_scaled", (col("age") - mean_age) / stddev_age)

# +---+----------+
# |age|age_scaled|
# +---+----------+
# |  1|      -1.0|
# |  2|       0.0|
# |  3|       1.0|
# +---+----------+
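Note the trade-off between the two: the first variant keeps everything distributed by cross-joining the single-row aggregate back onto the data, while the second collects just the two scalar statistics to the driver with first() and applies them as literals, which is usually simpler when you're scaling a single column.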

If you want a Transformer, you can split the vector into columns.
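For example, here is a minimal sketch of unpacking the scaled value back into a plain numeric column (assuming Spark 3.0+, where pyspark.ml.functions.vector_to_array is available; column names follow the question):

from pyspark.ml.functions import vector_to_array
from pyspark.sql.functions import col, mean

# 'age_scaled' holds 1-element vectors; convert each to an array
# and take entry 0 to get an ordinary double column
unpacked = sample17_scaled.withColumn(
    "age_scaled_num", vector_to_array(col("age_scaled")).getItem(0)
)

# Plain aggregations now work on the numeric column
unpacked.select(mean("age_scaled_num")).show()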

